# Compounding studies using a new DOX

Formal design of experiments (DOX) methodology has been slowly gaining use in the rubber industry over the past decade. Comparatively little has been published on detailed compounding projects utilizing DOX, particularly in connection with dynamic properties related to engineering, such as dynamic shear modulus (G') and loss factor (tangent delta) (refs. 1 and 2).

Since not all properties are thought to relate to levels of compounding materials in a linear fashion, prudent experimental designs will include multiple levels of the control factor ingredients in order to allow fitting of nonlinear models if necessary. Nonlinear models need sufficient degrees of freedom, which imply more experimental runs, to allow generation of the multiple coefficients used to define the model.
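The degrees-of-freedom burden can be seen by counting the coefficients a full quadratic model would need; the sketch below is illustrative only (the `quadratic_terms` helper is not from the original work):

```python
from math import comb

def quadratic_terms(k):
    """Coefficients in a full quadratic model for k control factors:
    one intercept, k linear terms, k squared terms and C(k, 2)
    two-way interaction terms."""
    return 1 + k + k + comb(k, 2)

print(quadratic_terms(3))  # 10 coefficients, so at least 10 runs are needed
```

Ten coefficients for three factors is why quadratic-capable designs such as Box-Behnken (13 runs) and central composite (15 runs) cannot be shrunk much further.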

Typical designs would include the Box-Behnken and central composite designs. For an experiment with three control factors these require 13 and 15 sets of experimental conditions, respectively. However, it is always desirable to minimize the number of experimental runs, so the question arises of whether a more limited design can provide sufficient information.

The smallest design that can explore the effects of three factors plus their simple interactions, and still retain any check on nonlinearity, is a basic two-level factorial cube with a center point. This requires only nine sets of conditions; but with the center point as the single check on nonlinearity, the capacity to quantify deviations from a linear model becomes questionable.

Half factorial arrays are frequently used with simple two level experiments to generate valid data from a smaller number of experimental runs. By using two complementary half factorial patterns, one nested within the other, instead of one full factorial it becomes possible to use each factor at four levels instead of just two. In combination with a center point a total of five levels of each factor are used, yet the total number of runs remains at nine. (See figure 1 for an illustration of the experimental volumes.)
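One way the two complementary half fractions can be combined is sketched below. This is a minimal illustration; the assignment of the positive-sign-product fraction to the outer (±2) cube is an assumption for the sketch, not necessarily the arrangement shown in figure 1:

```python
from itertools import product

def nested_half_factorial():
    """Nine-run nested half factorial for three factors: one half
    fraction at the +/-2 levels, the complementary half fraction at
    the +/-1 levels, plus a single center point."""
    runs = []
    for signs in product((-1, 1), repeat=3):
        if signs[0] * signs[1] * signs[2] > 0:
            # outer half fraction, expanded to the +/-2 levels
            runs.append(tuple(2 * s for s in signs))
        else:
            # complementary inner half fraction at the +/-1 levels
            runs.append(signs)
    runs.append((0, 0, 0))  # center point
    return runs

design = nested_half_factorial()
print(len(design))  # 9 runs, each factor appearing at five levels
```

Each factor takes the values -2, -1, 0, 1 and 2 somewhere in the array, yet only nine runs are required.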

It was decided to test out this type of combined array, which is simply described as a nested half factorial (NHF), in connection with a project to evaluate dynamic property control in polychloroprene. The three major ingredients used were a semi-reinforcing particulate, a much smaller highly reinforcing particulate and a petroleum based fluid plasticizer. Since the first ingredient has intrinsically less effect on compound properties, it was used at a concentration much higher than the other two materials; ranges of concentration of the highly reinforcing particulate and plasticizer were identical.

Their full designations are not germane to the evaluation of the NHF array, and they are identified as ingredients A, B and C; levels of use are shown in the formulation in table 1, but for analysis were normalized to -2, -1, 0, 1 and 2.
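Using the table 1 ranges (A: 20-60 phr, B and C: 5-25 phr), the normalization to coded levels is a simple linear mapping; a sketch (the `normalize` helper is illustrative, not from the original analysis):

```python
# Hypothetical linear coding of phr levels onto the -2..2 scale,
# assuming the five levels are spaced evenly across each range.
def normalize(phr, low, high):
    """Map an ingredient level (phr) onto the -2..2 coded scale."""
    center = (low + high) / 2
    step = (high - low) / 4       # four intervals between five levels
    return (phr - center) / step

print(normalize(60, 20, 60))  # 2.0  (top of the A range)
print(normalize(10, 5, 25))   # -1.0 (second-lowest B or C level)
```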

With the generous co-operation of several companies it was possible to test the compounds in a variety of ways, some very routine, some less often encountered and a few that are not commonly performed. It was anticipated that at least some of the broad selection of test properties would not fit within simple linear models, so a series of regression analyses using several alternate models were employed in order to determine what kind of model might best describe each ingredient versus property relationship.

Experimental

Ten individual lab mill batches were mixed, all based on the formula in table 1. The particular polychloroprene used was a comparatively new type, a peptizable grade (Neoprene GK) selected for its very favorable combination of physical properties and particular suitability for injection molding. The tetraethylthiuram disulfide (ethyl tuads) is the peptizing agent, the octylated diphenyl amine (Agerite Stalite S) is an antidegradant, and the processing aid is a complex mixture (VanFre AP-2). Stearic acid is also an aid to processing, while the two metal oxides constitute the standard cure system for this polymer.

Both actual and normalized levels of the three variable ingredients are shown in table 2 for all nine formulations. The center point was replicated once, which raised the total of mixes to ten. [TABULAR DATA 2 OMITTED]

Batch mixing was done with the two center point compounds as the first and tenth batches, and the others randomized in between. All mixing was performed on a standard model two-roll laboratory mill on the same day using the same lots of every raw material, broken down and blended in the same procedure by the same pair of operators. After each batch was fully mixed, final homogeneity was obtained by taking the batch off a moderately tight mill into a roll and reinserting the roll into the mill longitudinally a total of at least eight times.

Ingredients were weighed out in 0.1 gram increments on a calibrated triple-beam balance. Final batch weights were checked and mixing losses ranged from 1-1.5%.

Basic rheological testing was performed on an oscillating disk rheometer according to ASTM D2084, using a 3° arc, small rotor, 100 range and an 18 minute timer at a temperature of 160°C. This temperature was then used to mold all test specimens, at a molding time equal to the compound T90 unless otherwise noted.

Durometer readings were taken from compression set buttons, using the procedure specified in ASTM D2240.

Standard test pads were prepared. Ultimate tensile strength, elongation at break, and the 100, 200 and 300% moduli were then measured for each compound. ASTM procedure D412 was used.

Compression set testing by ASTM D395 Method B was also performed at 70°C for 70 hrs. on specimens molded for T90 plus 5 minutes. All these data are summarized in table 3. [TABULAR DATA 3 OMITTED]

Also shown in table 3 are the less common test data, including:

* Lupke rebound, as described in The Vanderbilt Rubber Handbook (13th ed., pgs. 529 and 530), with a sample size of two;

* Goodrich heat build up (ASTM D623) under conditions of a room temperature start and 20 minutes of flexing at 1,800 rpm under a 6.6 kg load and a 4.8 mm stroke (sample size of two);

* flex fatigue as tested by the Monsanto fatigue to failure machine per ASTM D4482, using cam #14 (100% elongation) and a sample size of six;

* flex fatigue as tested by the Wallace ring fatigue machine, cycling from 25 to 150% tensile strain, with a sample size of six;

* fatigue as tested using fatigue crack propagation measurements on specialized testing equipment (refs. 3-5) (Lord Corp. Research & Development internal procedures, sample size of two);

* dynamic property determination using a lap shear specimen and specialized testing equipment (refs. 6 and 7) (Lord Corp. Aerospace Products Division internal procedures, sample size of two).

Results and discussion

The respective levels of ingredients A, B, C were chosen with the intent of having the compounds vary from moderately soft to moderately stiff, perhaps in the range of 40-75 durometer (Shore A), while also varying the hysteresis of the compounds to a significant degree. The remaining properties would then change in whatever degree the system dictated, to be measured and correlated with the primary characteristics of the formulae.

Actual measured durometers went from 43 to 76 points, indicating the general degree of reinforcement/plasticization expected of the control variables was in fact largely as anticipated.

Analysis of durometer data through the regression analysis function of a commercial DOX software package (XStat, John Wiley publishers, New York) yielded an excellent fit with a simple linear model. The percentage of the data explained by the model (R-squared) was 99.2%. The model generated for durometer is of the type:

Y = intercept + C1X1 + C2X2 + C3X3 (1)

with Y as the response, X representing a given control factor and C the coefficient matched to each significant factor affecting the response. In this work X1 is A, X2 is B, etc. Interaction models would contain mixed terms with their own coefficients, for example:

Y = intercept + C1X1 + C2X2 + C3X3 + C12X1X2 + C13X1X3 + C23X2X3 (2)

but not every model need contain all possible terms. Scientific conservatism dictates using only those terms needed to adequately describe the response of interest.
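Because the NHF columns are mutually orthogonal and each sums to zero, fitting equation (1) by least squares reduces to simple projections. The sketch below demonstrates the fitting step on synthetic, noise-free data built from the durometer intercept and coefficients discussed below; it is an illustration, not the published analysis:

```python
# Nine NHF runs at normalized levels (one half fraction at +/-2,
# the complementary fraction at +/-1, plus a center point).
X = [( 2,  2,  2), ( 2, -2, -2), (-2,  2, -2), (-2, -2,  2),
     (-1, -1, -1), (-1,  1,  1), ( 1, -1,  1), ( 1,  1, -1),
     ( 0,  0,  0)]
# Synthetic durometer-like response; real data would carry scatter.
true = (66.2, 4.45, 3.45, -3.35)
y = [true[0] + sum(c * x for c, x in zip(true[1:], run)) for run in X]

# Orthogonal, zero-sum columns let each coefficient be estimated
# independently as a projection of the response onto that column.
intercept = sum(y) / len(y)
coeffs = []
for j in range(3):
    col = [run[j] for run in X]
    coeffs.append(sum(v * yi for v, yi in zip(col, y)) /
                  sum(v * v for v in col))
print(round(intercept, 2), [round(c, 2) for c in coeffs])
```

The fit recovers the intercept and the three coefficients exactly, since the synthetic data contain no noise.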

In table 4 are shown the intercepts and coefficients for whichever model provided a good fit (>95%) for each of the properties listed in table 3. Since the normalized levels of A, B and C were used in the analysis it becomes easy to associate meaning with each coefficient. [TABULAR DATA 4 OMITTED]

For instance, in regard to durometer the intercept was 66.2, the A coefficient was 4.45, the B coefficient was 3.45 and the C coefficient was -3.35. This means the average durometer for the compounds was 66.2, and that increasing ingredient A to the level normalized as 1 increases the durometer by 4.45 points, while going to the highest level of A (which by comparison is normalized as 2) would increase durometer by 8.9 points.

Increasing ingredient B in the same way would raise durometer by 3.45 or 6.9 points (on average), but raising levels of C would drop hardness by 3.35 or 6.7 points. Using the ingredients at their -1 and -2 levels would result in hardness changes of the same magnitude, but reversed in sign.

Clearly the hardest compound would result from using both particulate materials at their highest levels while using the plasticizer at its lowest level. This is scarcely a surprise to any compounder. What is important is that the model predicts that the durometer would then be 88.7 points, and the prediction is statistically based with a high degree of confidence. While the +2A, +2B, -2C formula was not mixed in the experimental series, the -2A, -2B, +2C formula was. According to the model, its predicted durometer is 43.7 points; the observed number was 43, which is a reasonable match.
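The arithmetic of these predictions is simple enough to verify directly; as a sketch, using the table 4 durometer intercept and coefficients:

```python
def durometer(a, b, c):
    """Linear durometer model built from the table 4 intercept and
    coefficients, with ingredient levels on the normalized scale."""
    return 66.2 + 4.45 * a + 3.45 * b - 3.35 * c

hardest = durometer( 2,  2, -2)   # both particulates high, plasticizer low
softest = durometer(-2, -2,  2)   # the corner that was actually mixed
print(round(hardest, 1), round(softest, 1))  # 88.7 43.7
```

The 43.7-point prediction against the observed 43 points is the match discussed above.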

Calling for durometer numbers including tenths of a point is of course an unrealistically exact measurement for a handheld indentor instrument, but what is important is that a fairly precise understanding of the response of hardness to the three ingredients has been achieved.

The possibility of an interaction between the small and large particle particulates in their reinforcement of the compound had been considered. However, in this case a basic linear model is quite adequate in describing the response within the ranges of ingredient concentrations used, which were deliberately chosen as moderate in width. (Conventional compounding experience indicates that when really wide ingredient ranges are used, e.g. 0-100 pphr of a reinforcing black or 0-50 pphr of a plasticizing oil, responses begin to deviate substantially from linearity.)

When the rheological properties are examined, linear models allow good fit of T2, T90 and maximum torque. As might have been expected, coefficients for the two reinforcing materials are always opposite in sign from that of the plasticizer (see table 4).

The fourth rheological response measured was minimum viscosity, which only got a moderate fit (88.5%) with a linear model. Use of stepwise regression showed that addition of one particular interaction improved the fit substantially (to 95.5%) while still retaining high confidence levels (>95%) for all coefficients. That interaction is between the plasticizer and the highly reinforcing particulate.
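The fit percentages quoted throughout are R-squared values; for reference, the calculation from residuals is sketched below with toy numbers (not the experimental data):

```python
def r_squared(observed, predicted):
    """Fraction of the variation in the observed data that the
    model's predictions explain."""
    mean = sum(observed) / len(observed)
    ss_tot = sum((o - mean) ** 2 for o in observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot

obs = [1.1, 2.3, 2.9, 4.2]       # toy response values
pred = [1.0, 2.5, 3.0, 4.0]      # toy model predictions
print(round(r_squared(obs, pred), 3))  # close to, but below, 1.0
```

Adding terms to a model can only raise this figure, which is why the confidence levels of the individual coefficients must also be checked, as is done here for the minimum viscosity interaction.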

Such interaction terms can be reflective of a physical reality, or they may be mathematical constructs. If the physical interaction does not fall within immediately understandable models, such as between a curative and an accelerator, it is usually good practice to test its validity by examining the predictions of the model. This can be done either by actually making a new experimental run, or simply looking at levels of predicted properties.

In this case the model with the interaction term predicts an absolute minimum viscosity of 0.4 units for a compound with A at -2, B at 2 and C at 2. The lowest minimum viscosity observed in the experimental runs was 1.1 units (which is low for a polychloroprene compound, and achievable in this case through the peptization of the polymer), and that compound had levels of -2A, -2B and +2C. The likelihood of really dropping the viscosity from 1.1 to 0.4 units by going from the minimum level of B to its maximum while keeping the other two ingredients the same is very low. This makes the interaction term more probably a mathematical construct than a true indication of a physical interaction.

However, use of the linear model to calculate lowest possible viscosity also yields an unrealistically low number. Since minimum viscosity numbers from a basic ODR are subject to appreciable scatter, and some curvature of the viscosity response is to be expected as it approaches a lower limit, it is possible that these data simply will not allow any precise and accurate linear or simple interaction model to be developed for the low viscosity response. A full response surface experimental design would be needed to generate the type of data that support a quadratic model, which can much better describe highly nonlinear responses.

The same difficulty in having the experimental data form an acceptable basis for developing a model clearly applies to ultimate tensile strength. A linear model only results in a 77% fit, and although a full interaction model provides a 95% fit, the confidence levels for several of the coefficients are not very high. This implies the better fit is simply due to having more terms in the equation, rather than having actually achieved a more valid model. Thus the normal degree of scatter in tensile strength determinations seems to constitute a serious problem in regard to precise data analysis.

Interestingly enough, ultimate elongation is also often subject to test scatter, but a linear model provides an excellent fit in this case (>99%). As expected, the two particulate materials decrease elongation while the liquid plasticizer increases it.

Tensile modulus at 300% elongation is fairly well fitted by a linear model (95%), with the reinforcing ingredients raising modulus while the plasticizer decreases it. No alternate model provides any significantly better fit, so interactions again appear to not contribute significantly.

Compression set at 70°C averaged slightly above 30% for the compounds, and no model gave a fit over 80% using any group of coefficients which all had high confidence levels. The set resistance may be related primarily to the polymer and its cure system, and not be affected enough by the control variables for clear trends to be discerned.

However, Lupke rebound was modeled well (96% fit) using a two element linear model. Both types of particulate decreased rebound, while the plasticizer had almost no effect. Since any form of powder additive to a compound tends to raise hysteresis, that part of the model makes perfect sense. Viscous plasticizers have also been known to increase hysteresis, while low viscosity types sometimes depress the glass transition temperature and thereby affect hysteresis; the lack of observed effect here indicates the plasticizer is neither viscous enough, nor interferes with the glass transition enough, to influence energy dissipation during rebound in this test.

Fatigue testing is well known for its tendency to display both substantial scatter and poor correlation between different types of test procedures. Testing according to two very different but not totally unrelated fatigue tests was arranged. The Wallace fatigue test uses ring specimens die cut from a molded sheet, which are frequently cycled between some minimum strain and a much greater strain. In this case the minimum strain was 25% and the maximum was 150%, since prior experience with this general type of formulation had shown such levels led to flex lives of moderate length.

The Monsanto flex to failure tester also uses die cut specimens, but they are of the dumbbell type and were flexed from 0 to 100% strain. Both tests were run using sample sizes of six, with the median life figure used for analysis, in order to minimize the effects of scatter. It was not expected that the flex data from these two tests would correlate well, and they did not. The correlation coefficient between the two sets of figures was only 0.33, which is fairly low.

However, the separate regressions on the two sets of data did result in two good fits (both about 93% - unusually high fits for fatigue data) using models with some interaction terms (see table 4). The Wallace fatigue life average is more than three times as many cycles as the Monsanto average despite the use of a significantly higher strain range in the Wallace than in the Monsanto. This may be due mainly to the Monsanto specimens being cycled through zero strain while the Wallace specimens were not; for strain crystallizing polymers such as polychloroprene and natural rubber the negative effects on fatigue of passing through zero strain have long been established.

Both models have negative coefficients for the particulates, a positive coefficient for the plasticizer, and a negative coefficient for the interaction between the two particulates; and the relative sizes of the coefficients for the three individual control factors are reasonably parallel in both response equations. This consistency between the models may be taken as an indication of a valid reflection as to how the ingredients affect fatigue in tension for this group of compounds.

The relative size of the two-particulate interaction term coefficients in the two models does differ by a factor of about eight, and the model for the Wallace fatigue also includes an interaction term between the large particulate and the plasticizer. Were it not for these differences, the correlation between the two sets of fatigue life numbers would be much higher than the 0.33 observed.

The fatigue crack propagation method is a relatively recent technique, for which no commercial equipment is currently available. It differs substantially from both the Wallace and Monsanto tests in that the data do not relate to an actual failure time, as when specimens rupture completely. Instead, a carefully measured rate of crack growth at one or more strain energy inputs is determined. Like all fatigue tests, it is subject to scatter, and a sample size of two is very marginal; this is why there is more difference between the two control compounds than in the other fatigue tests, and it makes achieving a good fit of the data less likely.

However, a reasonable fit was obtained using a model with all three factors and a two-factor interaction. Interestingly, the signs of all coefficients in the model are consistently opposite from the corresponding coefficients in the Wallace and Monsanto models; since higher numbers in FCP indicate worse fatigue while in the other tests they imply better fatigue, this is exactly how the signs should match up if the underlying fatigue mechanisms are truly related.

Also, the correlation between the FCP and Wallace numbers is fairly good, at the -0.80 level, and these methods have in common that they do not allow the specimens to cycle through zero strain. It might be possible to speculate about the meaning of the comparisons and contrasts between the models in relation to the differences between the modes of fatigue in the respective tests, but that tempting topic will not be pursued here.

Several responses can be observed from the Goodrich heat build up (HBU) test, but only two will be analyzed here. They are the immediate percent deflection under the static load, and the difference between initial temperature and internal temperature after 20 minutes of cyclic deflection (determined using a needle probe).

The initial deflection can be fitted very well (98%) by a linear model in which both particulates decrease deflection and the plasticizer increases it. This is as might be expected, since the first two materials raise modulus and plasticizer causes modulus to drop.

Temperature rise can also be fitted well, but the model contains all the main factors plus the three-factor interaction. The particulates have positive coefficients, the plasticizer has a negative one, and the interaction is also positive. This makes sense in the context of higher moduli requiring more work to be done and heat to be liberated accordingly, while the three-factor interaction may relate to increasing total filler fraction raising the overall level of hysteresis and thereby adding further to heat generation.

The key engineering properties are the static shear modulus (G stat), the dynamic shear modulus (G') and the loss factor (tangent delta). Although data were obtained across a wide range of shear deflections and frequencies, for the limited analysis appropriate here the data from the single condition of 10% dynamic shear at 10 Hertz will be used.

It was especially for these dynamic properties that the possibility of some significant factor interactions was thought to exist. However, all three properties can be described adequately by simple linear models. The loss factor model in particular has a fit of over 98%. The two moduli have linear fits closely approaching 95%, which can be improved to over 98% by inclusion of interaction terms; considering the minimal contribution of such extra terms the simpler model appears to be both efficient and effective.

Loss factor, too, is increased by all three ingredients, but proportionately much more so by the highly reinforcing particulate. This relates well to the degree of polymer-filler interaction, percentage of bound rubber, etc., associated with a small particle ingredient, which contribute to energy dissipation.

The intercept for G' is about half again as large as that of G stat. This is due to the normal apparent stiffening of a viscoelastic material associated with higher frequency inputs. Simply put, the modulus of elastomers increases as the rate of deformation increases.

The sensitivity of the formulations to rate of deformation is further evidenced by the changing coefficients derived for each ingredient when G' is measured. The coefficient for B increases by a factor of two when the compound is tested dynamically, much more of an increase than seen in A or C. This again relates to the higher contributions of the small particle size material.

For resilient elastomers the ratio of G' to G stat commonly ranges from perhaps 1.25 to 1.75 or above, depending on the details of the formula. Analysis of the dynamic/static ratio for these compounds results in a very good fit (over 97%) again using the linear model. The coefficients are all positive, with the two particulates being equal in effect and the plasticizer at about half their magnitude. This implies that the polymer has an intrinsic low dynamic/static ratio, which goes up as the volume fraction of the polymer in the compound goes down.

Since the dynamic/static ratio would also be affected by hysteretic mechanisms in the elastomer, a correlation between the ratio and loss factor might be expected. The actual correlation coefficient is 0.993, clearly an excellent matching of properties.

Likewise a relationship between rebound and loss factor might be expected, and indeed is found to exist. In this case the coefficient is 0.904, a good but less than outstanding correlation (table 5). Other close correlations are:

* G stat and HBU initial deflection - 0.941;

* Durometer and G'- 0.955;

* G' and 300% tensile modulus - 0.993.
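The property-to-property figures quoted here are ordinary Pearson correlation coefficients; a self-contained sketch of the calculation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # perfectly correlated, ~1.0
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))   # perfectly anti-correlated, ~-1.0
```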

Once all the models have been developed, they may be used to maximize or minimize a target property while simultaneously predicting the other properties of the particular formulation called out. In the following examples G' is constrained to be between 1.9 and 2.1 MPa:

* For maximizing loss factor the call out is A = 1.97, B = -0.374 and C = 2, yielding durometer of 67, G' of 2.1 MPa, Wallace fatigue of 322.2 KC and loss factor of 0.283;

* For minimizing loss factor the call out is A = -0.484, B = -2 and C = -2, yielding durometer of 64, G' of 1.9 MPa, Wallace fatigue of 172.8 KC and loss factor of 0.128;

* For maximizing heat build up the call out is A = 0.481, B = -2 and C = -1, yielding durometer of 65, G' of 2.1 MPa, loss factor of 0.162, Monsanto fatigue of 70.3 KC and heat build up of 143°C;

* For minimizing heat build up the call out is A = -2, B = 2 and C = 2, yielding durometer of 57.5, G' of 2 MPa, loss factor of 0.258, Monsanto fatigue of 114.4 KC and a heat build up of 124°C. (Note: the normalized levels of the ingredients A, B and C can readily be converted back to parts per hundred rubber to produce a working formulation.) [TABULAR DATA 5 OMITTED]
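Call outs of this kind come from solving the fitted response equations under the G' constraint. A rough sketch of the search pattern follows; note that the `loss_factor` and `g_prime` coefficients below are made-up placeholders for illustration, not the table 4 values:

```python
from itertools import product

def loss_factor(a, b, c):
    # placeholder linear model, NOT the fitted table 4 coefficients
    return 0.20 + 0.01 * a + 0.03 * b + 0.005 * c

def g_prime(a, b, c):
    # placeholder linear model for dynamic shear modulus, MPa
    return 2.0 + 0.3 * a + 0.25 * b - 0.2 * c

best = None
levels = [i / 10 for i in range(-20, 21)]   # normalized -2.0 .. 2.0 grid
for a, b, c in product(levels, repeat=3):
    if 1.9 <= g_prime(a, b, c) <= 2.1:      # constraint window on G'
        candidate = (loss_factor(a, b, c), a, b, c)
        if best is None or candidate > best:
            best = candidate
print(best)  # maximum loss factor and the call out that achieves it
```

Commercial DOX packages solve the same problem more directly, but a grid search over the normalized levels makes the mechanics of a constrained call out visible.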

Naturally no model predicts results with 100% accuracy, but for most of these properties the predictions should prove to be very useful; starting with the call outs from the model it would take very few further compound mixes to optimize a formula for some set of the desired properties.

Several of the analyses were redone without using the data from the two center points. The models derived remained the same, with only minor changes in the coefficients. This indicates that using center points to check on nonlinearity becomes unnecessary when the complementary half-factorials are employed. The design then becomes even more efficient.

Conclusions

Use of an experimental array composed of nested half-factorials, with or without a center point, does provide good data for developing both linear and simple interaction models to describe numerous characteristics of a family of polychloroprene compounds which vary in content of two reinforcing materials and one plasticizer. Only three properties (minimum rheometer reading, ultimate tensile strength and compression set) from a total selection of 16 responses could not be characterized satisfactorily (R-squared at or above 95%) using the data and linear or interaction models.

The majority of responses proved to be linear, but a few more complex phenomena such as flex fatigue did require interaction terms to properly describe them. Several properties were found to correlate with each other as might be expected from classical views of elastomeric structure-property relationships.

Use of appropriate software to simultaneously solve several sets of response equations makes it possible to predict multiple properties, both static and dynamic, of a theoretical formulation.

References

[1.] F.J. Krakowski, "Effect of polymer ratio, black level and distribution, and secondary accelerator level in NR/BR blends," presented at a meeting of the Rubber Division, ACS, Detroit, MI, October 17, 1989.

[2.] T.S. Kohli and J.R. Halladay, "Investigation of polybutadiene/bromobutyl blends for low temperature shock and vibration control," presented at a meeting of the Rubber Division, ACS, Toronto, Ontario, Canada, May 23, 1991.

[3.] A.N. Gent, P.B. Lindley and A.G. Thomas, J. Appl. Polymer Science, 8, 455 (1964).

[4.] P.B. Lindley and A.G. Thomas, Proc. Rubber Tech. Conf., 4th, London, 1962, 428.

[5.] A.N. Gent, J. Appl. Polymer Science, 6, 497 (1962).

[6.] J.R. Halladay, J.L. Potter and T.S. Kohli, "Shock and vibration control in aerospace applications," presented at a meeting of the Rubber Division, ACS, Louisville, KY, May 21, 1992.

[7.] R.J. Del Vecchio, F.J. Krakowski and G.T. McKenzie, "Dynamic property variation and analysis in NBR," presented at a meeting of the Rubber Division, ACS, Los Angeles, CA, April 28, 1985.

Table 1 - test formula

Polychloroprene                100
Stearic acid                     1
Tetraethylthiuram disulfide    0.5
Magnesium oxide                  4
ODPA antidegradant               2
Ingredient A                 20-60
Ingredient B                  5-25
Ingredient C                  5-25
Zinc oxide                       5
Processing aid                   2

Both actual and normalized levels of the three variable ingredients are shown in table 2 for all nine formulations. The center point was replicated once, which raised the total of mixes to ten. [TABULAR DATA 2 OMITTED]

Batch mixing was done with the two center point compounds as the first and tenth batches, and the others randomized in between. All mixing was performed on a standard model two-roll laboratory mill on the same day using the same lots of every raw material, broken down and blended in the same procedure by the same pair of operators. After each batch was fully mixed, final homogeneity was obtained by taking the batch off a moderately tight mill into a roll and reinserting the roll into the mill longitudinally a total of at least eight times.

Ingredients were weighed out in 0.1 gram increments on a calibrated triple-beam balance. Final batch weights were checked and mixing losses ranged from 1-1.5%.

Basic rheological testing was performed on an oscillating disk rheometer according to ASTM D2084, using a 3° arc, small rotor, 100 range and an 18 minute timer at a temperature of 160°C. This temperature was then used to mold all test specimens, at a molding time equal to the compound T90 unless otherwise noted.

Durometer readings were taken from compression set buttons, using the procedure specified in ASTM D2240.

Standard test pads were prepared. Ultimate tensile strength, elongation at break, and the 100, 200 and 300% moduli were then measured for each compound. ASTM procedure D412 was used.

Compression set testing by ASTM D395 Method B was also performed at a temperature of 70°C for 70 hrs. on specimens molded for T90 plus 5 minutes. All these data are summarized in table 3. [TABULAR DATA 3 OMITTED]

Also shown in table 3 are the less common test data, including:

* Lupke rebound, as described in The Vanderbilt Rubber Handbook (13th ed., pgs. 529 and 530), with a sample size of two;

* Goodrich heat build up (ASTM D623), under conditions of a room temperature start and 20 minutes of flexing at 1,800 RPM under a 6.6 kg load and a 4.8 mm stroke (sample size of two);

* flex fatigue as tested by the Monsanto fatigue to failure machine per ASTM D4482, using cam #14 (100% elongation) and a sample size of six;

* flex fatigue as tested by the Wallace ring fatigue machine, cycling from 25 to 150% tensile strain with a sample size of six;

* fatigue as tested using fatigue crack propagation measurements on specialized testing equipment (refs. 3-5) (Lord Corp. Research & Development internal procedures, sample size of two);

* dynamic property determination using a lap shear specimen and specialized testing equipment (refs. 6 and 7) (Lord Corp. Aerospace Products Division internal procedures, sample size of two).

Results and discussion

The respective levels of ingredients A, B, C were chosen with the intent of having the compounds vary from moderately soft to moderately stiff, perhaps in the range of 40-75 durometer (Shore A), while also varying the hysteresis of the compounds to a significant degree. The remaining properties would then change in whatever degree the system dictated, to be measured and correlated with the primary characteristics of the formulae.

Actual measured durometers went from 43 to 76 points, indicating the general degree of reinforcement/plasticization expected of the control variables was in fact largely as anticipated.

Analysis of durometer data through the regression analysis function of a commercial DOX software package (XStat, John Wiley publishers, New York) yielded an excellent fit with a simple linear model. The percentage of the data explained by the model (R-squared) was 99.2%. The model generated for durometer is of the type:

Y = intercept + C1X1 + C2X2 + C3X3 (1)

with Y as the response, X representing a given control factor and C the coefficient matched to each significant factor affecting the response. In this work X1 is A, X2 is B, etc. Interaction models would contain mixed terms with their own coefficients, for example:

Y = intercept + C1X1 + C2X2 + C3X3 + C12X1X2 + C13X1X3 + C23X2X3 (2)

but not every model need contain all possible terms. Scientific conservatism dictates using only those terms needed to adequately describe the response of interest.
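A minimal sketch of how models of types (1) and (2) are fit by least squares. The response data here are invented stand-ins for the omitted tables, and the `r_squared` helper is an assumption for illustration, not the XStat package's actual procedure.

```python
import numpy as np

# Synthetic durometer-like response at random normalized levels (assumption:
# true behavior is linear with the table 4 durometer coefficients, plus noise).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(9, 3))                       # nine runs, factors A, B, C
y = 66.2 + X @ np.array([4.45, 3.45, -3.35]) + rng.normal(0, 0.3, 9)

def r_squared(design_matrix, response):
    """Least-squares fit; returns fraction of variance explained and coefficients."""
    coeffs, *_ = np.linalg.lstsq(design_matrix, response, rcond=None)
    residuals = response - design_matrix @ coeffs
    return 1 - residuals.var() / response.var(), coeffs

ones = np.ones((9, 1))
linear = np.hstack([ones, X])                             # model (1): intercept + main effects
pairs = np.column_stack([X[:, 0] * X[:, 1], X[:, 0] * X[:, 2], X[:, 1] * X[:, 2]])
interaction = np.hstack([linear, pairs])                  # model (2): adds two-factor terms

r2_lin, _ = r_squared(linear, y)
r2_int, _ = r_squared(interaction, y)
print(f"linear R^2 = {r2_lin:.3f}, interaction R^2 = {r2_int:.3f}")
```

Note that the interaction fit can never be worse than the linear fit on the same data, which is exactly why extra terms should only be kept when their coefficients carry high confidence.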

In table 4 are shown the intercepts and coefficients for whichever model provided a good fit (>95%) for each of the properties listed in table 3. Since the normalized levels of A, B and C were used in the analysis it becomes easy to associate meaning with each coefficient. [TABULAR DATA 4 OMITTED]

For instance, in regard to durometer the intercept was 66.2, the A coefficient was 4.45, the B coefficient was 3.45 and the C coefficient was -3.35. This means the average durometer for the compounds was 66.2, and that increasing ingredient A to the level normalized as 1 increases the durometer by 4.45 points, while going to the highest level of A (which by comparison is normalized as 2) would increase durometer by 8.9 points.

Increasing ingredient B in the same way would raise durometer by 3.45 or 6.9 points (on average), but raising levels of C would drop hardness by 3.35 or 6.7 points. Using the ingredients at their -1 and -2 levels would result in hardness changes of the same magnitude, but reversed in sign.

Clearly the hardest compound would result from using both particulate materials at their highest levels while using the plasticizer at its lowest level. This is scarcely a surprise to any compounder. What is important is that the model predicts that the durometer would then be 88.7 points, and the prediction is statistically based with a high degree of confidence. While the +2A, +2B, -2C formula was not mixed in the experimental series, the -2A, -2B, +2C formula was. According to the model, its predicted durometer is 43.7 points; the observed number was 43, which is a reasonable match.
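The arithmetic behind these predictions can be verified directly from the intercept and coefficients quoted above:

```python
# Durometer model from table 4: intercept plus one coefficient per factor,
# evaluated at normalized ingredient levels a, b, c.
intercept, cA, cB, cC = 66.2, 4.45, 3.45, -3.35

def durometer(a, b, c):
    """Predicted Shore A hardness at normalized levels a, b, c."""
    return intercept + cA * a + cB * b + cC * c

print(f"{durometer(2, 2, -2):.1f}")   # hardest corner: 88.7
print(f"{durometer(-2, -2, 2):.1f}")  # softest corner: 43.7 (observed 43)
```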

Quoting durometer values to tenths of a point is of course unrealistically exact for a handheld indentor instrument, but what is important is that a fairly precise understanding of the response of hardness to the three ingredients has been achieved.

The possibility of an interaction between the small and large particle particulates in their reinforcement of the compound had been considered. However, in this case a basic linear model is quite adequate in describing the response within the ranges of ingredient concentrations used, which were deliberately chosen as moderate in width. (Conventional compounding experience indicates that when really wide ingredient ranges are used, e.g. 0-100 pphr of a reinforcing black or 0-50 pphr of a plasticizing oil, responses begin to deviate substantially from linearity.)

When the rheological properties are examined, linear models allow good fit of T2, T90 and maximum torque. As might have been expected, coefficients for the two reinforcing materials are always opposite in sign from that of the plasticizer (see table 4).

The fourth rheological response measured was minimum viscosity, which only got a moderate fit (88.5%) with a linear model. Use of stepwise regression showed that addition of one particular interaction improved the fit substantially (to 95.5%) while still retaining high confidence levels (>95%) for all coefficients. That interaction is between the plasticizer and the highly reinforcing particulate.

Such interaction terms can be reflective of a physical reality, or they may be mathematical constructs. If the physical interaction does not fall within immediately understandable models, such as between a curative and an accelerator, it is usually good practice to test its validity by examining the predictions of the model. This can be done either by actually making a new experimental run, or simply looking at levels of predicted properties.

In this case the model with the interaction term predicts an absolute minimum viscosity of 0.4 units for a compound with A at -2, B at 2 and C at 2. The lowest minimum viscosity observed in the experimental runs was 1.1 units (which is low for a polychloroprene compound, and achievable in this case through the peptization of the polymer), and that compound had levels of -2A, -2B and +2C. The likelihood of really dropping the viscosity from 1.1 to 0.4 units by going from the minimum level of B to its maximum while keeping the other two ingredients the same is very low. This makes the interaction term more probably a mathematical construct than a true indication of a physical interaction.

However, use of the linear model to calculate lowest possible viscosity also yields an unrealistically low number. Since minimum viscosity numbers from a basic ODR are subject to appreciable scatter, and some curvature of the viscosity response is to be expected as it approaches a lower limit, it is possible that these data simply will not allow any precise and accurate linear or simple interaction model to be developed for the low viscosity response. A full response surface experimental design would be needed to generate the type of data that support a quadratic model, which can much better describe highly nonlinear responses.

The same difficulty in having the experimental data form an acceptable basis for developing a model clearly applies to ultimate tensile strength. A linear model only results in a 77% fit, and although a full interaction model provides a 95% fit, the confidence levels for several of the coefficients are not very high. This implies the better fit is simply due to having more terms in the equation, rather than having actually achieved a more valid model. Thus the normal degree of scatter in tensile strength determinations seems to constitute a serious problem in regard to precise data analysis.

Interestingly enough, ultimate elongation is also often subject to test scatter, but a linear model provides an excellent fit in this case (>99%). As expected, the two particulate materials decrease elongation while the liquid plasticizer increases it.

Tensile modulus at 300% elongation is fairly well fitted by a linear model (95%), with the reinforcing ingredients raising modulus while the plasticizer decreases it. No alternate model provides any significantly better fit, so interactions again appear to not contribute significantly.

Compression set at 70°C averaged slightly above 30% for the compounds, and no model gave a fit over 80% using any group of coefficients which all had high confidence levels. The set resistance may be related primarily to the polymer and its cure system, and not be affected enough by the control variables for clear trends to be discerned.

However, Lupke rebound was modeled well (96% fit) using a two element linear model. Both types of particulate decreased rebound, while the plasticizer had almost no effect. Since any form of powder additive to a compound tends to raise hysteresis that part of the model makes perfect sense. Viscous plasticizers have also been known to increase hysteresis, while low viscosity types sometimes depress the glass transition temperature and thereby affect hysteresis; the lack of observed effect here indicates the plasticizer is neither viscous enough nor interferes with glass transition enough to have an impact on the energy dissipation during the mechanism of rebound in this test.

Fatigue testing is well known for its tendency to display both substantial scatter and poor correlation between different types of test procedures. Testing according to two very different but not totally unrelated fatigue tests was arranged. The Wallace fatigue test uses ring specimens die cut from a molded sheet, which are repeatedly cycled between some minimum strain and a much greater strain. In this case the minimum strain was 25% and the maximum was 150%, since prior experience with this general type of formulation had shown such levels led to flex lives of moderate length.

The Monsanto flex to failure tester also uses die cut specimens, but they are of the dumbbell type and were flexed from 0 to 100% strain. Both tests were run using sample sizes of six, with the median life figure used for analysis, in order to minimize the effects of scatter. It was not expected that the flex data from these two tests would correlate well, and they did not. The correlation coefficient between the two sets of figures was only 0.33, which is fairly low.
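The correlation coefficient itself is straightforward to compute; the six fatigue-life values per test below are invented stand-ins for the omitted table 3 data, used only to show the mechanics.

```python
import numpy as np

# Hypothetical median fatigue lives in kilocycles (stand-ins, not measured data).
wallace = np.array([322.2, 172.8, 250.0, 140.0, 290.0, 200.0])
monsanto = np.array([70.3, 114.4, 60.0, 95.0, 80.0, 75.0])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(wallace, monsanto)[0, 1]
print(f"correlation coefficient r = {r:.2f}")
```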

However, the separate regressions on the two sets of data did result in two good fits (both about 93% - unusually high fits for fatigue data) using models with some interaction terms (see table 4). The Wallace fatigue life average is more than three times as many cycles as the Monsanto average despite the use of a significantly higher strain range in the Wallace than in the Monsanto. This may be due mainly to the Monsanto specimens being cycled through zero strain while the Wallace specimens were not; for strain crystallizing polymers such as polychloroprene and natural rubber the negative effects on fatigue of passing through zero strain have long been established.

Both models have negative coefficients for the particulates, a positive coefficient for the plasticizer, and a negative coefficient for the interaction between the two particulates; and the relative sizes of the coefficients for the three individual control factors are reasonably parallel in both response equations. This consistency between the models may be taken as an indication of a valid reflection as to how the ingredients affect fatigue in tension for this group of compounds.

The relative size of the two-particulate interaction term coefficients in the two models does differ by a factor of about eight, and the model for the Wallace fatigue also includes an interaction term between the large particulate and the plasticizer. Were it not for these differences the correlation between the two sets of fatigue life numbers would be much higher than the 0.33 observed.

The fatigue crack propagation method is a relatively recent technique, for which no commercial equipment is currently available. It differs substantially from both Wallace and Monsanto tests in that the data do not relate to an actual failure time, as when specimens rupture completely. Instead, a carefully measured rate of crack growth at one or more strain energy inputs is determined. Like all fatigue tests, it is subject to scatter, and a sample size of two is very marginal; this is why there is more difference between the two control compounds than in the other fatigue tests, and makes achieving a good fit of the data less likely.

However, a reasonable fit was obtained using a model with all three factors and the two-particulate interaction. Interestingly, the signs of all coefficients in the model are consistently opposite in sign from the corresponding coefficients in the Wallace and Monsanto models; since higher numbers in FCP relate to worse fatigue while in the other tests they imply better fatigue, this is exactly how the signs should match up if the underlying fatigue mechanisms are truly related.

Also, the correlation between FCP and Wallace numbers is fairly good, at the -0.80 level, and these methods have in common their not allowing the specimens to cycle through zero strain. It might be possible to speculate about the meaning of the comparisons and contrasts between the models in relation to the differences between the modes of fatigue in the respective test, but that tempting topic will not be pursued here.

Several responses can be observed from the Goodrich heat build up (HBU) test, but only two will be analyzed here. They are the immediate percent deflection under the static load, and the difference between initial temperature and internal temperature after 20 minutes of cyclic deflection (determined using a needle probe).

The initial deflection can be fitted very well (98%) by a linear model in which both particulates decrease deflection and the plasticizer increases it. This is as might be expected, since the first two materials raise modulus and plasticizer causes modulus to drop.

Temperature rise can also be fitted well, but the model contains all the main factors plus the three-factor interaction. The particulates have positive coefficients, the plasticizer has a negative one, and the interaction is also positive. This makes sense in the context of higher moduli requiring more work to be done and heat to be liberated accordingly, while the three-factor interaction may relate to increasing total filler fraction raising the overall level of hysteresis and thereby adding further to heat generation.

The key engineering properties are the static shear modulus (G stat), the dynamic shear modulus (G') and the loss factor (tangent delta). Although data were obtained across a wide range of shear deflections and frequencies, for the limited analysis appropriate here the data from the single condition of 10% dynamic shear at 10 Hertz will be used.

It was especially for these dynamic properties that the possibility of some significant factor interactions was thought to exist. However, all three properties can be described adequately by simple linear models. The loss factor model in particular has a fit of over 98%. The two moduli have linear fits closely approaching 95%, which can be improved to over 98% by inclusion of interaction terms; considering the minimal contribution of such extra terms the simpler model appears to be both efficient and effective.

Loss factor, too, is increased by all three ingredients, but proportionately much more so by the highly reinforcing particulate. This relates well to the degree of polymer-filler interaction, percentage of bound rubber, etc., connected with a small particle ingredient, which contribute to energy dissipation.

The intercept for G' is about half again as large as that of G stat. This is due to the normal apparent stiffening of a viscoelastic material associated with higher frequency inputs. Simply put, the modulus of elastomers increases as the rate of deformation increases.

The sensitivity of the formulations to rate of deformation is further evidenced by the changing coefficients derived for each ingredient when G' is measured. The coefficient for B increases by a factor of two when the compound is tested dynamically, much more of an increase than seen in A or C. This again relates to the higher contributions of the small particle size material.

For resilient elastomers the ratio of G' to G stat commonly ranges from perhaps 1.25 to 1.75 or above, depending on the details of the formula. Analysis of the dynamic/static ratio for these compounds results in a very good fit (over 97%) again using the linear model. The coefficients are all positive, with the two particulates being equal in effect and the plasticizer at about half their magnitude. This implies that the polymer has an intrinsic low dynamic/static ratio, which goes up as the volume fraction of the polymer in the compound goes down.

Since the dynamic/static ratio would also be affected by hysteretic mechanisms in the elastomer, a correlation between the ratio and loss factor might be expected. The actual correlation coefficient is 0.993, clearly an excellent matching of properties.

Likewise a relationship between rebound and loss factor might be expected, and indeed is found to exist. In this case the coefficient is 0.904, a good but less than outstanding correlation (table 5). Other close correlations are:

* G stat and HBU initial deflection - 0.941;

* Durometer and G'- 0.955;

* G' and 300% tensile modulus - 0.993.

Once all the models have been developed, they may be used to maximize or minimize a target property while simultaneously predicting the other properties of the particular formulation called out. In the following examples G' is constrained to be between 1.9 and 2.1 MPa:

* For maximizing loss factor the call out is A = 1.97, B = -0.374 and C = 2, yielding durometer of 67, G' of 2.1 MPa, Wallace fatigue of 322.2 KC and loss factor of 0.283;

* For minimizing loss factor the call out is A = -0.484, B = -2 and C = -2, yielding durometer of 64, G' of 1.9 MPa, Wallace fatigue of 172.8 KC and loss factor of 0.128;

* For maximizing heat build up the call out is A = 0.481, B = -2 and C = -1, yielding durometer of 65, G' of 2.1 MPa, loss factor of 0.162, Monsanto fatigue of 70.3 KC and heat build up of 143°C;

* For minimizing heat build up the call out is A = -2, B = 2 and C = 2, yielding durometer of 57.5, G' of 2 MPa, loss factor of 0.258, Monsanto fatigue of 114.4 KC and a heat build up of 124°C. (Note: the normalized levels of the ingredients A, B and C can readily be converted back to parts per hundred rubber to produce a working formulation.) [TABULAR DATA 5 OMITTED]

Naturally no model predicts results with 100% accuracy, but for most of these properties the predictions should prove to be very useful; starting with the call outs from the model it would take very few further compound mixes to optimize a formula for some set of the desired properties.
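This kind of constrained call-out search can be sketched as follows. The two response models below use invented stand-in coefficients (the real fitted models are in the omitted table 4); a coarse grid search over the normalized levels keeps only formulations whose predicted G' falls in the 1.9-2.1 MPa window and picks the one maximizing loss factor.

```python
import itertools

def loss_factor(a, b, c):
    # Invented stand-in linear model; not the fitted coefficients from table 4.
    return 0.20 + 0.010 * a + 0.035 * b + 0.005 * c

def g_prime(a, b, c):
    # Invented stand-in model for dynamic shear modulus, MPa.
    return 2.0 + 0.30 * a + 0.25 * b - 0.20 * c

# Coarse grid over the normalized -2..+2 range in steps of 0.1.
levels = [round(-2 + 0.1 * i, 1) for i in range(41)]
feasible = [(a, b, c) for a, b, c in itertools.product(levels, repeat=3)
            if 1.9 <= g_prime(a, b, c) <= 2.1]
best = max(feasible, key=lambda x: loss_factor(*x))
print("call out:", best,
      "loss factor:", round(loss_factor(*best), 3),
      "G':", round(g_prime(*best), 2), "MPa")
```

A dedicated optimizer would refine the answer, but a grid over three normalized factors is cheap enough to enumerate directly and makes the simultaneous-solution idea concrete.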

Several of the analyses were redone without using the data from the two center points. The models derived remained the same, with only minor changes in the coefficients. This indicates that use of the center points to check on nonlinearity is made unnecessary when the complementary half-factorials are employed. The design then becomes even more efficient.

Conclusions

Use of an experimental array composed of nested half-factorials, with or without a center point, does provide good data for developing both linear and simple interaction models to describe numerous characteristics of a family of polychloroprene compounds which vary in content of two reinforcing materials and one plasticizer. Only three properties (minimum rheometer reading, ultimate tensile strength and compression set) from a total selection of 16 responses could not be characterized satisfactorily (R-squared at or above 95%) using the data and linear or interaction models.

The majority of responses proved to be linear, but a few more complex phenomena such as flex fatigue did require interaction terms to properly describe them. Several properties were found to correlate with each other as might be expected from classical views of elastomeric structure-property relationships.

Use of appropriate software to simultaneously solve several sets of response equations makes it possible to predict multiple properties, both static and dynamic, of a theoretical formulation.

References

[1.] F.J. Krakowski, "Effect of polymer ratio, black level and distribution, and secondary accelerator level in NR/BR blends," presented at a meeting of the Rubber Division, ACS, Detroit, MI, October 17, 1989.

[2.] T.S. Kohli and J.R. Halladay, "Investigation of polybutadiene/bromobutyl blends for low temperature shock and vibration control," presented at a meeting of the Rubber Division, ACS, Toronto, Ontario, Canada, May 23, 1991.

[3.] A.N. Gent, P.B. Lindley and A.G. Thomas, J. Appl. Polymer Science, 8, 455 (1964).

[4.] P.B. Lindley and A.G. Thomas, Proc. Rubber Tech. Conf., 4th, London, 1962, 428.

[5.] A.N. Gent, J. Appl. Polymer Science, 6, 497 (1962).

[6.] J.R. Halladay, J.L. Potter and T.S. Kohli, "Shock and vibration control in aerospace applications," presented at a meeting of the Rubber Division, ACS, Louisville, KY, May 21, 1992.

[7.] R.J. Del Vecchio, F.J. Krakowski and G.T. McKenzie, "Dynamic property variation and analysis in NBR," presented at a meeting of the Rubber Division, ACS, Los Angeles, CA, April 28, 1985.

Author: R.J. Del Vecchio

Publication: Rubber World

Date: Feb 1, 1993