# Assessing differences between results determined according to the Guide to the Expression of Uncertainty in Measurement

In some metrology applications, multiple results of measurement are obtained for a common measurand and it is necessary to determine whether the results agree with each other. A result of measurement based on the Guide to the Expression of Uncertainty in Measurement (GUM) consists of a measured value together with its associated standard uncertainty. In the GUM, the measured value is regarded as the expected value and the standard uncertainty as the standard deviation, both known values, of a state-of-knowledge probability distribution. A state-of-knowledge distribution represented by a result need not be completely known. How, then, can one assess the differences between results based on the GUM? Metrologists have for many years used the Birge chi-square test as 'a rule of thumb' to assess the differences between two or more measured values for the same measurand by pretending that the standard uncertainties were the standard deviations of the presumed sampling probability distributions from random variation of the measured values. We point out that this is a misuse of the standard uncertainties; the Birge test and the concept of statistical consistency motivated by it do not apply to results of measurement based on the GUM. In 2008, the International Vocabulary of Metrology, third edition (VIM3) introduced the concept of metrological compatibility. We propose that the concept of metrological compatibility be used to assess the differences between results based on the GUM for the same measurand. A test of the metrological compatibility of two results of measurement does not conflict with a pairwise Birge test of the statistical consistency of the corresponding measured values.

Key words: Birge test; interlaboratory evaluations; predictive p-value; uncertainty

Accepted: October 14, 2010

Available online: http://www.nist.gov/jres

1. Introduction

To test the proficiency of individual laboratories in conducting specific tasks, interlaboratory comparisons (ILC) are often used. In an ILC between measurement laboratories, the task is generally the measurement of a common artifact or of fractions of the same sample of material. To develop a certified reference material, a well-characterized material is measured by two or more methods in one or more laboratories. In both cases the data consist of multiple results of measurement (measured values with associated uncertainties) of a common measurand. To assess the differences between two or more measured values for the same measurand, metrologists have for many years used a test proposed by the physicist Raymond T. Birge in 1932 (1). Birge introduced the term consistency for lack of significant differences between measured values. The Birge test is based on treating the measured values as realizations of random draws from sampling probability density functions (pdfs). A sampling pdf models possible outcomes for measured values in contemplated replications of the measurement procedure under the same conditions. Therefore, the consistency of measured values assessed by the Birge test is statistical consistency. The Birge test applies to uncorrelated measured values only. In Sec. 2, we review a concept of statistical consistency motivated by the Birge test. The idea of statistical consistency belongs to the period when the error-analysis view of measurement was prevalent. That view was a hindrance to communicating results of measurement and to advancing the science and technology of measurement. Therefore, leading authorities in the field of metrology developed the Guide to the Expression of Uncertainty in Measurement (GUM) (2). According to the GUM, a result of measurement consists of a measured value together with its associated standard uncertainty.
In the GUM, the measured value is regarded as the expected value and the standard uncertainty as the standard deviation, both known values, of a state-of-knowledge probability distribution. A state-of-knowledge distribution represented by a result of measurement need not be completely known. We note in Sec. 3 that the Birge test and the concept of statistical consistency motivated by it are not applicable to results of measurement based on the GUM. How, then, can one assess the differences between results based on the GUM for the same measurand? In 2008, the International Vocabulary of Metrology, third edition (VIM3) (3) introduced the concept of metrological compatibility of two or more results of measurement determined according to the GUM. In Sec. 4, we review the VIM3 concept of metrological compatibility and propose that this concept be used to assess the differences between multiple results based on the GUM for the same measurand. In Sec. 5, we show that a test of the metrological compatibility of two results of measurement does not conflict with a pairwise Birge test of the statistical consistency of the corresponding measured values.

2. The Birge Test and Concept of Statistical Consistency

Suppose $x_1, \ldots, x_n$ are $n$ measured values for a common measurand which is believed to be sufficiently stable. The Birge test is based on regarding the measured values $x_1, \ldots, x_n$ as realizations of random draws from their presumed sampling pdfs. A sampling pdf models possible outcomes in contemplated replications of a measurement procedure subject to random effects under the same conditions. Therefore, the consistency (lack of significant differences between measured values) assessed by the Birge test is statistical consistency. The Birge test is applicable when the sampling pdfs of the measured values $x_1, \ldots, x_n$ are uncorrelated. The Birge test requires knowledge of the variances $\sigma_1^2, \ldots, \sigma_n^2$ of the sampling pdfs of $x_1, \ldots, x_n$, respectively. Statistical consistency of the measured values $x_1, \ldots, x_n$ means that their expected values are indistinguishable (1) in view of the corresponding variances. Specifically, the Birge test checks whether the measured values $x_1, \ldots, x_n$ may be modeled as realizations from normal (Gaussian) sampling pdfs with unknown but equal expected values and known variances $\sigma_1^2, \ldots, \sigma_n^2$. Birge proposed that to check the consistency of the measured values $x_1, \ldots, x_n$, one can calculate the test statistic

$$R^2 = \frac{\sum_{i=1}^{n} w_i (x_i - x_W)^2}{n - 1}, \qquad (1)$$

where $w_i = 1/\sigma_i^2$ for $i = 1, 2, \ldots, n$, and $x_W = \sum_i w_i x_i / \sum_i w_i$ is the weighted mean of $x_1, \ldots, x_n$. If the calculated value of $R^2$ is substantially larger than one, then the dispersion of $x_1, \ldots, x_n$ is greater than what can be expected from the normal pdfs with equal expected values and known variances $\sigma_1^2, \ldots, \sigma_n^2$. In that case the measured values $x_1, \ldots, x_n$ can be declared to be statistically inconsistent.
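As an illustrative sketch (not from the paper), the Birge statistic of Eq. (1) can be computed as follows; the function name, measured values, and stated variances are hypothetical:

```python
def birge_statistic(x, var):
    """Birge ratio R^2 of Eq. (1): weighted dispersion of the measured
    values x about their weighted mean, with weights w_i = 1/sigma_i^2
    given by the stated variances."""
    w = [1.0 / v for v in var]
    x_w = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)  # weighted mean x_W
    return sum(wi * (xi - x_w) ** 2 for wi, xi in zip(w, x)) / (len(x) - 1)

# Hypothetical measured values and stated variances sigma_i^2
x = [10.1, 10.4, 9.8, 10.2]
var = [0.04, 0.09, 0.04, 0.01]
R2 = birge_statistic(x, var)  # a value substantially larger than one
                              # suggests statistical inconsistency
```

A value of `R2` well above one would, under the Birge rule, flag the set of measured values as dispersed beyond what the stated variances can explain.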

Statistical interpretation of the Birge test: Birge was a physicist, and he proposed his test before much of statistical theory as it is known today was established. However, the Birge test of consistency can now be interpreted as a classical (sampling-theory) statistical test of hypothesis. The measured values $x_1, \ldots, x_n$ are presumed to have normal sampling pdfs with unknown but equal expected values and variance-covariance matrix $\tau^2 \cdot \mathrm{Diag}[\sigma_1^2, \ldots, \sigma_n^2]$, where $\tau^2$ is an unknown parameter and $\sigma_1^2, \ldots, \sigma_n^2$ are known. The null hypothesis $H_0$ is that $\tau^2 \le 1$ and the alternative hypothesis $H_1$ is that $\tau^2 > 1$. The null hypothesis $H_0$ means that the variances of $x_1, \ldots, x_n$ are not greater than $\sigma_1^2, \ldots, \sigma_n^2$, respectively. The alternative hypothesis $H_1$ means that the variances of $x_1, \ldots, x_n$ are greater than $\sigma_1^2, \ldots, \sigma_n^2$ (4). The classical p-value $p_C$ is the maximum probability under the null hypothesis of realizing, in contemplated replications of the $n$ measurements, a value of the test statistic more extreme than its realized (calculated) value. The classical p-value of a realization of $(n - 1)R^2$ is

$$p_C = \Pr\{\chi^2_{(n-1)} \ge (n - 1)R^2\}, \qquad (2)$$

where $\chi^2_{(n-1)}$ denotes a variable with the chi-square probability distribution with $n - 1$ degrees of freedom (4). If the classical p-value $p_C$ is too small, say, less than 0.05, then the null hypothesis is rejected with level of significance 0.05 or less. A rejection of the null hypothesis means that the dispersion of the measured values $x_1, \ldots, x_n$ is greater than what can be expected from normal distributions for $x_1, \ldots, x_n$ with equal expected values and stated variances $\sigma_1^2, \ldots, \sigma_n^2$, respectively. The dispersion of $x_1, \ldots, x_n$ can be greater than expected under the null hypothesis because either the variances of $x_1, \ldots, x_n$ are greater than $\sigma_1^2, \ldots, \sigma_n^2$ or their expected values are not equal. If the stated variances $\sigma_1^2, \ldots, \sigma_n^2$ are not questionable, then the assumption that the expected values of $x_1, \ldots, x_n$ are equal appears to be unreasonable. In that case, the measured values $x_1, \ldots, x_n$ can be declared to be statistically inconsistent.

Limitations of the Birge test: A limitation of the Birge test is that it is applicable to uncorrelated measured values $x_1, \ldots, x_n$ only. However, it can easily be generalized to correlated measured values $x_1, \ldots, x_n$ whose covariances $\sigma_{12}, \ldots, \sigma_{(n-1)n}$ are known (4). The Birge test suggests the following notion of the statistical consistency of the measured values $x_1, \ldots, x_n$: The measured values $x = (x_1, \ldots, x_n)^t$ are said to be statistically consistent if their dispersion is not greater than what can be expected from the normal consistency model, which postulates that the joint $n$-variate sampling pdf of $x$ is normal $N(\mathbf{1}\mu, D)$ with unknown expected value $\mathbf{1}\mu$ and variance-covariance matrix $D = [\sigma_{ij}]$, where $\mathbf{1} = (1, \ldots, 1)^t$, $\sigma_{ij}$ is the covariance between $x_i$ and $x_j$, and $\sigma_{ii} = \sigma_i^2$ for $i, j = 1, 2, \ldots, n$ (4).

Another limitation of the Birge test (and of its generalized version for correlated measured values) is that it is a one-sided test of hypothesis which checks whether the dispersion of $x_1, \ldots, x_n$ is more than what can be expected from a normal consistency model. A review of the Birge test in (5) notes that if the realized value of the Birge test statistic $R^2$ is substantially less than one, then the stated variances $\sigma_1^2, \ldots, \sigma_n^2$ may well be too large. To avoid declarations of statistical consistency from overstated variances, the following definition of statistical consistency was proposed in (6).

Definition of statistical consistency: The measured values $x = (x_1, \ldots, x_n)^t$ are said to be statistically consistent if they reasonably fit the normal consistency model, which postulates that the joint $n$-variate sampling pdf of $x$ is normal $N(\mathbf{1}\mu, D)$ with unknown expected value $\mathbf{1}\mu$ and variance-covariance matrix $D = [\sigma_{ij}]$.

This definition requires a different approach for testing statistical consistency than the Birge test and its generalized version for correlated values. A modern method to assess the fit of a statistical model to the data is Bayesian posterior predictive checking (7). Posterior predictive checking is a Bayesian adaptation of classical (sampling-theory) statistical hypothesis testing. A function of the data (and possibly unknown parameters) called a 'discrepancy measure' is defined to characterize a potential discrepancy between the statistical model and the data. The posterior predictive p-value $p_P$ of a discrepancy measure $T(x)$ is the probability of realizing in contemplated replications a value of the discrepancy measure more extreme than its realized value. If the posterior predictive p-value is close to zero (or to one), then the fit of the statistical model to the data is suspect.

If the measured values $x_1, \ldots, x_n$ are uncorrelated, then the statistic $T_c(x) = (n - 1)R^2 = \sum_i w_i (x_i - x_W)^2$ is a useful discrepancy measure to check the overall fit of the normal consistency model $N(\mathbf{1}\mu, D)$ to the measured values $x_1, \ldots, x_n$. As discussed in [6, Sec. 2.4], the posterior predictive p-value of the realized discrepancy measure $T_c(x) = (n - 1)R^2$ is

$$p_P = \Pr\{\chi^2_{(n-1)} \ge (n - 1)R^2\}. \qquad (3)$$

We note that (3) is identical to the classical p-value $p_C$ given in (2). Thus Bayesian posterior predictive checking of the discrepancy measure $T_c(x) = (n - 1)R^2$ is equivalent to the Birge test of statistical consistency.

Bayesian posterior predictive checking can be used to investigate any number of potential discrepancies between the statistical model and the data. To assess the difference between two particular measured values $x_i$ and $x_j$, the statistic $T_{i-j}(x) = |x_i - x_j|$ is a useful discrepancy measure, for $i, j = 1, 2, \ldots, n$ and $i \ne j$. The Bayesian posterior predictive p-value of the realized discrepancy measure $|x_i - x_j|$ is

$$p_P = \Pr\left\{ Z \ge \frac{|x_i - x_j|}{\sqrt{\sigma_i^2 + \sigma_j^2 - 2\rho_{ij}\sigma_i\sigma_j}} \right\}, \qquad (4)$$

where $\rho_{ij}$ is the correlation coefficient between the presumed normal sampling pdfs of $x_i$ and $x_j$; the covariance between $x_i$ and $x_j$ is $\sigma_{ij} = \rho_{ij}\sigma_i\sigma_j$, and $Z$ denotes a variable with the standard normal distribution $N(0, 1)$ [6, Sec. 3.2]. A posterior predictive p-value $p_P$ close to zero suggests that the difference between $x_i$ and $x_j$ is larger than what can be expected from the normal statistical consistency model $N(\mathbf{1}\mu, D)$. That is, the measured values $x_i$ and $x_j$ do not seem to have the same expected value, and hence they are not mutually statistically consistent.
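The pairwise p-value of Eq. (4) can be sketched with the Python standard library; the function name and example values below are ours, chosen only for illustration:

```python
import math
from statistics import NormalDist

def pairwise_p_value(xi, xj, si, sj, rho=0.0):
    """Posterior predictive p-value of Eq. (4): Pr{Z >= |x_i - x_j| / s},
    where s is the standard deviation of the difference under the normal
    consistency model and rho is the correlation coefficient rho_ij."""
    s = math.sqrt(si**2 + sj**2 - 2.0 * rho * si * sj)
    z = abs(xi - xj) / s
    return 1.0 - NormalDist().cdf(z)  # upper tail of N(0, 1)

# Hypothetical pair of measured values, each with standard deviation 0.1
p = pairwise_p_value(10.1, 10.5, 0.1, 0.1)  # a small p flags a discrepancy
```

A `p` close to zero would suggest that the two measured values are not mutually statistically consistent under the presumed model.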

3. Concept of Statistical Consistency Does Not Apply to Results Based on the GUM

A result of measurement determined according to the GUM consists of a measured value together with its associated standard uncertainty. Suppose $[x_1, u(x_1)], \ldots, [x_n, u(x_n)]$ are $n$ results of measurement for a common measurand, where $x_1, \ldots, x_n$ are the measured values and $u(x_1), \ldots, u(x_n)$ are the corresponding standard uncertainties. According to the GUM, a measured value $x_i$ and its associated standard uncertainty $u(x_i)$ represent a state-of-knowledge pdf attributed to the measurand, for $i = 1, 2, \ldots, n$. Following the GUM, we use the symbol $X_i$ for a quantity as well as for a variable with the state-of-knowledge pdf about the quantity $X_i$ represented by the result $[x_i, u(x_i)]$, for $i = 1, 2, \ldots, n$. The measured value $x_i$ is regarded as the expected value $E(X_i)$ and the standard uncertainty $u(x_i)$ is regarded as the standard deviation $S(X_i)$ of the pdf of $X_i$, for $i = 1, 2, \ldots, n$. The mainstream GUM requires knowledge of only the expected value $E(X_i)$ and the standard deviation $S(X_i)$ of a state-of-knowledge pdf of $X_i$. The GUM does not require that the state-of-knowledge pdf of $X_i$ be completely known. When the state-of-knowledge pdfs of $X_1, \ldots, X_n$ are correlated, the correlation coefficients are assumed to be known. Following the GUM, we denote the correlation coefficient $R(X_i, X_j)$ between the state-of-knowledge pdfs of $X_i$ and $X_j$ by the symbol $r(x_i, x_j)$. Note that $\{x_1, \ldots, x_n\}$, $\{u(x_1), \ldots, u(x_n)\}$, and $\{r(x_1, x_2), \ldots, r(x_{n-1}, x_n)\}$ are symbols for known values.

For many years, metrologists have used the Birge test as 'a rule of thumb' to assess the consistency of the measured values by treating the squared standard uncertainties $u^2(x_1), \ldots, u^2(x_n)$ as the known variances $\sigma_1^2, \ldots, \sigma_n^2$ of the presumed normal (Gaussian) sampling pdfs of the measured values $x_1, \ldots, x_n$; see, for example, (8). The guideline for the analysis of key comparisons developed by the BIPM Director's Advisory Group on Uncertainties recommends the use of the Birge chi-square test to assess the consistency of measured values by treating the squared standard uncertainties as the known variances of the presumed sampling pdfs of the measured values (9). The consistency of the measured values from CIPM key comparisons and supplementary comparisons is almost always assessed using the Birge test (10).

The squared standard uncertainties $u^2(x_1), \ldots, u^2(x_n)$ cannot in any logical sense be identified with the known variances $\sigma_1^2, \ldots, \sigma_n^2$ of the presumed normal (Gaussian) sampling pdfs of the measured values $x_1, \ldots, x_n$. The standard deviation of a sampling pdf represents possible dispersion from random variation in contemplated replications of the measurement procedures. A standard uncertainty, by contrast, expresses the dispersion of a state-of-knowledge pdf which could be attributed to the measurand based on all available statistical and non-statistical information, and it includes all significant components of uncertainty, whether arising from random effects or from corrections applied for systematic effects. In measurements done in high-echelon laboratories, the component of uncertainty arising from random effects is generally a very small part of the combined standard uncertainty. Treating the squared standard uncertainties $u^2(x_1), \ldots, u^2(x_n)$ determined according to the GUM as the known variances $\sigma_1^2, \ldots, \sigma_n^2$ from random variation (in contemplated replications of the measurements) is a misuse of the standard uncertainties. Also, as noted earlier, the state-of-knowledge pdfs represented by the results $[x_1, u(x_1)], \ldots, [x_n, u(x_n)]$ may not be completely known. Therefore the Birge test and the concept of statistical consistency motivated by it do not apply to results of measurement determined according to the GUM.

4. VIM3 Concept of Metrological Compatibility Applies to Results Based on the GUM

A measured quantity value [3, definitions 1.19 and 2.10] is the product of a numerical value and a measurement unit. The measurement unit implies that the measured value is traceable to a reference for that measurement unit. A result of measurement (a measured value together with its associated standard uncertainty) is traceable to a reference only if the result can be related to a practical realization of that reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty [3, definition 2.41]. Two or more results of measurement are metrologically comparable only if they are traceable to the same reference [3, definition 2.46]. Metrological comparability does not imply that the measured values have similar magnitudes. Thus, for example, the distance between my apartment and my office expressed in meters is metrologically comparable to the distance between my apartment and the moon, also expressed in meters. The concept of metrological compatibility discussed below applies only to those results of measurement for a common measurand which are metrologically comparable. That is, the results must be traceable to the same reference.

The concept of statistical consistency can be applied to any set of numerical values of similar magnitude. They do not have to be measured values. Thus, for example, one can test the statistical consistency of deviations (or relative deviations expressed as percentages) from a benchmark value. Although a metrologist is expected to assess the consistency of only those measured values which have the same measurement unit, this is not a requirement of statistical consistency.

All $n$ results $[x_1, u(x_1)], \ldots, [x_n, u(x_n)]$ for a common measurand must be traceable to the same reference for them to be metrologically comparable [3, definition 2.46]. The VIM3 concept of metrological compatibility is defined for two results of measurement at a time. The following definition is an elaboration of the succinct definition given in VIM3 [3, definition 2.47].

Definition of metrological compatibility: Two metrologically comparable results $[x_1, u(x_1)]$ and $[x_2, u(x_2)]$ for the same measurand are said to be metrologically compatible if

$$\xi(x_1 - x_2) = \frac{|x_1 - x_2|}{\sqrt{u^2(x_1) + u^2(x_2) - 2r(x_1, x_2)\,u(x_1)\,u(x_2)}} \le \kappa, \qquad (5)$$

for a specified threshold $\kappa$, where $r(x_1, x_2)$ is a symbol for the correlation coefficient $R(X_1, X_2)$ between the variables $X_1$ and $X_2$. The quantity in the denominator of (5) is the standard deviation of the state-of-knowledge pdf for $X_1 - X_2$, which may be incompletely determined. When the pdfs represented by $[x_1, u(x_1)]$ and $[x_2, u(x_2)]$ are uncorrelated, then $R(X_1, X_2) = 0$ and (5) reduces to

$$\xi(x_1 - x_2) = \frac{|x_1 - x_2|}{\sqrt{u^2(x_1) + u^2(x_2)}} \le \kappa. \qquad (6)$$

A set of metrologically comparable results $[x_1, u(x_1)], [x_2, u(x_2)], \ldots, [x_n, u(x_n)]$ for the same measurand is said to be metrologically compatible if for every one of the $n(n - 1)/2$ pairs of results $[x_i, u(x_i)]$ and $[x_j, u(x_j)]$ we have

$$\xi(x_i - x_j) = \frac{|x_i - x_j|}{\sqrt{u^2(x_i) + u^2(x_j) - 2r(x_i, x_j)\,u(x_i)\,u(x_j)}} \le \kappa, \qquad (7)$$

for a specified threshold $\kappa$ [3, definition 2.47]. The VIM3 does not discuss how the threshold $\kappa$ should be determined. A conventional value of $\kappa$ is two.
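A minimal sketch of the pairwise check of Eqs. (5)-(7); the function names and the example results are ours, not from the VIM3, and uncorrelated pdfs are assumed in the set check:

```python
import math
from itertools import combinations

def xi_value(x1, u1, x2, u2, r=0.0):
    """Normalized difference xi of Eq. (5); r is the correlation
    coefficient r(x_1, x_2) between the state-of-knowledge pdfs."""
    return abs(x1 - x2) / math.sqrt(u1**2 + u2**2 - 2.0 * r * u1 * u2)

def set_compatible(results, kappa=2.0):
    """Eq. (7): a set of (x, u) results is metrologically compatible
    if xi <= kappa for every one of the n(n-1)/2 pairs (uncorrelated)."""
    return all(
        xi_value(xa, ua, xb, ub) <= kappa
        for (xa, ua), (xb, ub) in combinations(results, 2)
    )

# Hypothetical results [x, u(x)] assumed traceable to the same reference
results = [(100.2, 0.3), (99.9, 0.4), (100.5, 0.3)]
ok = set_compatible(results)  # True when every pairwise xi <= kappa
```

The verdict is dichotomous, mirroring the VIM3 definition: the set is compatible or it is not.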

The concept of metrological compatibility can be used to assess the differences between results of measurement based on the GUM for the same measurand. The concepts of metrological comparability and compatibility do not require that the state-of-knowledge pdfs represented by the results $[x_1, u(x_1)], [x_2, u(x_2)], \ldots, [x_n, u(x_n)]$ be completely known. Thus they fit the GUM. When the set of results $[x_1, u(x_1)], \ldots, [x_n, u(x_n)]$ is metrologically compatible, we can say that the differences between the measured values $x_1, \ldots, x_n$ are insignificant in view of the uncertainties $u(x_1), \ldots, u(x_n)$.

To assess the metrological compatibility of results based on the GUM using criterion (5), (6), or (7), the threshold $\kappa$ needs to be specified. A proper choice of $\kappa$ is to a large extent a matter of agreement because it requires accepting the economic consequences of that choice. Although a conventional value of $\kappa$ is two, depending on the application, the interested parties could agree on a different value for $\kappa$. Once the value of the threshold $\kappa$ is set, the conclusion of a test of metrological compatibility based on the VIM3 definition is dichotomous: either a set of results is metrologically compatible or it is incompatible. The concept of metrological compatibility is already being used by metrologists who are familiar with it; see, for example, (11), (12).

The VIM3 definition of metrological compatibility can easily be extended to the metrological compatibility of a set of results and a reference result $[x_R, u(x_R)]$, where $x_R$ is the reference value with standard uncertainty $u(x_R)$. Suppose the pdfs represented by the measurement results are uncorrelated with the pdf represented by the reference result. A set of results $[x_1, u(x_1)], \ldots, [x_n, u(x_n)]$ metrologically comparable with a reference result $[x_R, u(x_R)]$ is compatible if

$$\xi(x_i - x_R) = \frac{|x_i - x_R|}{\sqrt{u^2(x_i) + u^2(x_R)}} \le \kappa, \qquad (8)$$

for $i = 1, 2, \ldots, n$ (13). Similarly, a set of results $[x_1, u(x_1)], \ldots, [x_n, u(x_n)]$ metrologically comparable with a combined result $[x_C, u(x_C)]$, where $x_C$ is the combined value (such as the arithmetic mean or a weighted mean) with standard uncertainty $u(x_C)$, is compatible if

$$\xi(x_i - x_C) = \frac{|x_i - x_C|}{\sqrt{u^2(x_i) + u^2(x_C) - 2r(x_i, x_C)\,u(x_i)\,u(x_C)}} \le \kappa, \qquad (9)$$

where $r(x_i, x_C)$ denotes the correlation coefficient between the pdfs represented by $[x_i, u(x_i)]$ and $[x_C, u(x_C)]$, for $i = 1, 2, \ldots, n$ (13).
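The extension in Eq. (8) can be sketched as follows; the function name, the results, and the reference result are illustrative assumptions, with uncorrelated pdfs as stated above:

```python
import math

def compatible_with_reference(results, x_ref, u_ref, kappa=2.0):
    """Eq. (8): every result (x_i, u(x_i)) must satisfy
    |x_i - x_R| / sqrt(u^2(x_i) + u^2(x_R)) <= kappa."""
    return all(
        abs(x - x_ref) / math.sqrt(u**2 + u_ref**2) <= kappa
        for x, u in results
    )

# Hypothetical results and a hypothetical reference result [x_R, u(x_R)]
results = [(10.1, 0.2), (10.4, 0.3), (9.8, 0.2)]
ok = compatible_with_reference(results, x_ref=10.0, u_ref=0.1)
```

The same shape of check serves Eq. (9) once the correlation term $-2r(x_i, x_C)\,u(x_i)\,u(x_C)$ is added under the square root.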

5. Concluding Remarks

For many years, metrologists have used the Birge chi-square test as 'a rule of thumb' to assess the differences between two or more measured values for the same measurand by pretending that the squared standard uncertainties were the known variances of the presumed normal sampling pdfs of the measured values. This is a misuse of the standard uncertainties based on the GUM. The Birge test and the concept of statistical consistency do not apply to results of measurement based on the GUM. As discussed in this paper, the VIM3 concept of metrological compatibility can be used to assess the differences between results of measurement determined according to the GUM. Thus metrologists can start using the VIM3 concept of metrological compatibility in place of the Birge test to assess the differences between multiple results of measurement of the same measurand.

The following is a pertinent question. Could the conclusions (about mutual agreement of results) based on the VIM3 concept of metrological compatibility and the Birge test (based on treating squared standard uncertainties as the known variances of sampling pdfs of measured values) differ? It is difficult to compare the Birge test and a test of metrological compatibility directly because the former is defined for an arbitrary integer $n > 1$ and the latter is defined for only two results at a time. For pairwise comparisons ($n = 2$), the Birge test statistic $R^2 = \sum_i w_i (x_i - x_W)^2 / (n - 1)$ reduces to

$$R^2 = \frac{(x_1 - x_2)^2}{\sigma_1^2 + \sigma_2^2}, \qquad (10)$$

which is the square of $(x_1 - x_2)/\sqrt{\sigma_1^2 + \sigma_2^2}$. Under the null hypothesis that the presumed normal sampling pdfs of $x_1$ and $x_2$ have the same expected value, the distribution of $(x_1 - x_2)/\sqrt{\sigma_1^2 + \sigma_2^2}$ is normal $N(0, 1)$. Therefore, when $n = 2$, the normal distribution can be used to assess the absolute difference $|x_1 - x_2|$. The square of a normal $N(0, 1)$ variable has the chi-square distribution $\chi^2_{(1)}$ with one degree of freedom. Therefore the square of the $(1 - \alpha/2) \times 100$-th percentile $z_{1-\alpha/2}$ of the normal $N(0, 1)$ distribution is equal to the $(1 - \alpha) \times 100$-th percentile $\chi^2_{(1),1-\alpha}$ of the $\chi^2_{(1)}$ distribution. Thus the realized value of (10) being less than $\chi^2_{(1),1-\alpha}$ is equivalent to the ratio $|x_1 - x_2|/\sqrt{\sigma_1^2 + \sigma_2^2}$ being less than $z_{1-\alpha/2}$. It follows that a declaration of Birge statistical consistency when the classical p-value $p_C$ of the Birge test (2) is not less than 0.05 (for example) is equivalent to the realization that

$$\frac{|x_1 - x_2|}{\sqrt{\sigma_1^2 + \sigma_2^2}} \le z_{0.975} = 1.96 \approx 2. \qquad (11)$$

We note from (6) and (11) that if the threshold $\kappa$ for metrological compatibility is set as $\kappa = 2$, then the conclusion of a check of metrological compatibility between a pair of results $[x_1, u(x_1)]$ and $[x_2, u(x_2)]$ would be identical to the assessment of statistical consistency between $x_1$ and $x_2$ based on the Birge test by (wrongly) treating $u^2(x_1)$ and $u^2(x_2)$ as $\sigma_1^2$ and $\sigma_2^2$, respectively (and treating the correlation coefficient $R(X_1, X_2)$ as $\rho_{12}$, which is zero in the Birge test). Therefore a pairwise Birge test of statistical consistency and a test of metrological compatibility do not conflict.
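This equivalence can be checked numerically. The sketch below uses the exact one-degree-of-freedom identity $\Pr\{\chi^2_{(1)} \ge t\} = 2(1 - \Phi(\sqrt{t}))$; the pair of results is hypothetical:

```python
import math
from statistics import NormalDist

x1, u1 = 10.00, 0.10   # hypothetical result 1
x2, u2 = 10.25, 0.10   # hypothetical result 2

ratio = abs(x1 - x2) / math.sqrt(u1**2 + u2**2)

# Pairwise Birge test: classical p-value of (n-1)R^2 = ratio^2 with 1 df,
# using Pr{chi2_1 >= t} = 2 * (1 - Phi(sqrt(t)))
p_C = 2.0 * (1.0 - NormalDist().cdf(ratio))
consistent = p_C >= 0.05   # fail to reject H0 -> statistically consistent

# Metrological compatibility check of Eq. (6) with kappa = 2
compatible = ratio <= 2.0
```

With the threshold at exactly 2 rather than 1.96, the two verdicts can differ only when the ratio falls in the narrow band between 1.96 and 2; everywhere else they coincide, in line with the conclusion above.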

Acknowledgments

We thank Javier Bernal, Tyler Estler, Walter Liggett, and Raju Datla for their comments on earlier drafts of this paper.

(1.) In the statistical literature, the term consistency is applied to a statistical estimator. A point estimator is said to be consistent if it approaches the parameter being estimated as the sample size increases.

6. References

(1.) R. T. Birge, The calculation of errors by the method of least squares, Physical Review 40, 207-227 (1932).

(2.) GUM (1995), Guide to the Expression of Uncertainty in Measurement, 2nd ed. (Geneva: International Organization for Standardization) ISBN 92-67-10188-9 (2008 version available at http://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf)

(3.) BIPM/JCGM (2008), International Vocabulary of Metrology--Basic and general concepts and associated terms, 3rd ed. (Sevres: Bureau International des Poids et Mesures, Joint Committee for Guides in Metrology) (available at http://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2008.pdf)

(4.) R. N. Kacker, A. B. Forbes, R. Kessel, and K. Sommer, Classical and Bayesian interpretation of the Birge test of consistency and its generalized version for correlated results from interlaboratory evaluations, Metrologia 45, 257-264 (2008).

(5.) B. N. Taylor, W. H. Parker, and D. N. Langenberg, Determination of e / h, Using Macroscopic Quantum Phase Coherence in Superconductors: Implications for Quantum Electrodynamics and the Fundamental Physical Constants, Reviews of Modern Physics 41, 375-496 (1969).

(6.) R. N. Kacker, A. B. Forbes, R. Kessel, and K. Sommer, Bayesian posterior predictive p-value of statistical consistency in interlaboratory evaluations, Metrologia 45, 512-523 (2008).

(7.) A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis, 2nd ed., Chapman & Hall (2004).

(8.) P. J. Mohr and B. N. Taylor, CODATA recommended values of the fundamental physical constants: 1998, Reviews of Modern Physics 72, 351-495 (2000), (current version available at http://physics.nist.gov/cuu/Constants/index.html).

(9.) M. G. Cox, The evaluation of key comparison data, Metrologia 39, 589-595 (2002), (these guidelines were developed by the BIPM Director's Advisory Group on Uncertainties).

(10.) The BIPM key comparison database (2010), http://kcdb.bipm.org/.

(11.) R. Wellum, A. Verbruggen, and R. Kessel, A new evaluation of the half-life of 241Pu, Journal of Analytical Atomic Spectrometry 24, 801-807 (2009).

(12.) R. U. Datla, R. Kessel, A. W. Smith, R. N. Kacker, and D. B. Pollock, Uncertainty analysis of remote sensing optical sensor data: guiding principles to achieve metrological consistency, International Journal of Remote Sensing 31, 867-880 (2010).

(13.) R. Kessel, R. N. Kacker, and K. Sommer, Proposal for combining results from multiple evaluations of the same measurand, submitted for publication (2009).

Raghu N. Kacker, Rudiger Kessel,

National Institute of Standards and Technology, Gaithersburg, MD 20899, USA

and

Klaus-Dieter Sommer

Physikalisch-Technische Bundesanstalt, D-38116 Braunschweig, Germany

raghu.kacker@nist.gov

ruediger.kessel@nist.gov

klaus-dieter.sommer@ptb.de

About the Authors: Raghu N. Kacker is a mathematical statistician in the Information Technology Laboratory of the National Institute of Standards and Technology, Gaithersburg, MD 20899, USA.

Rudiger Kessel is a guest researcher in the Information Technology Laboratory of the National Institute of Standards and Technology, Gaithersburg, MD 20899, USA.

Klaus-Dieter Sommer is director of the Chemical Physics and Explosion Protection Division of the National Metrology Institute of Germany, Physikalisch-Technische Bundesanstalt, D-38116 Braunschweig, Germany.

The National Institute of Standards and Technology is an agency of the U.S. Department of Commerce.

Publication: Journal of Research of the National Institute of Standards and Technology, Nov 1, 2010.