
Commentary: are we splitting hairs over splitting DRGs?

In my prepublication review of the article by Averill et al., I had argued that data-splitting is an aggressive technique that can result in an improvement (i.e., smaller weighted variance or coefficient of variation) even though the split makes no medical sense. I had suggested that the "baseline" for computing improvements in the data should be something other than zero, and I had recommended that the authors simulate the results that would be obtained from random splitting of their data.

In response, Averill performed this simulation for a typical DRG. Although he preferred not to add the discussion to the article, on the grounds that readers might confuse the purpose of the simulation with that of the more conventional F-test, I found his simulation results interesting and thought-provoking.

In particular, one might ask why the mean reduction in variance (RIV) of costs declines with the pattern that is exhibited by his simulation. Can we discover a formula for the relationship between the number of observations to be split, the number of splits, and the expected RIV? In addition, I would like to point out the association between RIV and the F-test, so that readers will not be confused as Averill fears.

In order to avoid unnecessarily complicated notation, the discussion of these issues presented here assumes that a population of $N$ observations is to be split into two mutually exclusive and exhaustive groups, $X$ and $Y$, with $N_x$ and $N_y$ observations, respectively. However, I will show how the results generalize to any number of splits. The benefits of splitting the data will be assessed by the extent to which the population variance, $\sigma^2$, is greater than the weighted arithmetic average of the group variances:

(1) $\sigma^2 > (N_x \sigma_x^2 + N_y \sigma_y^2)/N$.

An equivalent way to write inequality (1) is

(2) $SSQ > SSQ_x + SSQ_y$,

where $SSQ$ stands for the sum of squared deviations. Thus, the formula for reduction in variance (RIV) is

(3) $RIV = (SSQ - SSQ_x - SSQ_y)/SSQ$.

The denominator of this formula simply is a way to express RIV as a proportion of total variance. I will ignore it, unless noted, and look only at the numerator.
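In code, Equation 3 is a one-liner. A minimal sketch in Python (NumPy assumed; the cost figures and group labels are hypothetical, chosen only to show a split whose group means differ sharply):

```python
import numpy as np

def riv(costs, labels):
    """Equation 3: RIV = (SSQ - sum of within-group SSQs) / SSQ."""
    costs = np.asarray(costs, dtype=float)
    labels = np.asarray(labels)
    ssq_total = np.sum((costs - costs.mean()) ** 2)
    ssq_within = sum(
        np.sum((costs[labels == g] - costs[labels == g].mean()) ** 2)
        for g in np.unique(labels)
    )
    return (ssq_total - ssq_within) / ssq_total

# Hypothetical cost data: a split that separates cheap from expensive cases
costs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
labels = [0, 0, 0, 1, 1, 1]
print(round(riv(costs, labels), 4))  # -> 0.9681; large because group means differ
```

Because the within-group sums of squares cannot exceed the total, the function always returns a value between 0 and 1.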


In my review, I had argued that random splitting of the data cannot produce a negative RIV. This is equivalent to arguing that $SSQ - SSQ_x - SSQ_y \geq 0$ (see Equation 3). By expanding the respective sums of squares, we obtain $SSQ - SSQ_x - SSQ_y = (N_x N_y/N)(M_x - M_y)^2 \geq 0$, where $M_x$ and $M_y$ are the means of the two groups into which the population has been split. Thus, it is obvious that random data-splitting cannot produce a negative RIV. In fact, the only way that random data-splitting can avoid reducing variance is for the means of the split groups to be equal. This would occur if there were no sampling variance, that is, if each time we randomly split the data, the means of the split groups were exactly equal and therefore equal to the population mean. Otherwise, random data-splitting must reduce variance.
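The nonnegativity claim is easy to verify numerically. The sketch below uses the standard between-group identity $SSQ - SSQ_x - SSQ_y = (N_x N_y/N)(M_x - M_y)^2$; the simulated cost population is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=100, scale=20, size=50)  # arbitrary simulated costs

def ssq(v):
    """Sum of squared deviations about the mean."""
    return np.sum((v - v.mean()) ** 2)

# One random split into groups X and Y
x, y = population[:20], population[20:]
numerator = ssq(population) - ssq(x) - ssq(y)
identity = (len(x) * len(y) / len(population)) * (x.mean() - y.mean()) ** 2

print(numerator >= 0)                   # -> True: splitting never raises variance
print(np.isclose(numerator, identity))  # -> True: matches the closed form
```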


Averill's simulation shows that the expected RIV from random data-splitting is, in general, small and that it declines as the size of the population increases. What is the formula that governs these results? Focusing again on the numerator of Equation 3 and expanding the sums of squares,

(4) $SSQ - SSQ_x - SSQ_y = -NM^2 + N_x M_x^2 + N_y M_y^2$,

where the population mean is denoted by $M$. Now, take the expected value of the numerator of RIV:

(5) $E(SSQ - SSQ_x - SSQ_y) = -NM^2 + N_x E(M_x^2) + N_y E(M_y^2)$.

The first term in Equation 5 is just a number, so it is unaffected by the expected value operator. However, the squared group means are random variables, and therefore we must consider their expected values. According to well-known statistical principles, the expected value of the square of $M_x$ is equal to $\mathrm{Var}(M_x) + [E(M_x)]^2$. The expression for the expected value of $M_y^2$ is analogous.

Substitute these expressions into Equation 5 to obtain:

(6) $E(SSQ - SSQ_x - SSQ_y) = -NM^2 + N_x \mathrm{Var}(M_x) + N_x [E(M_x)]^2 + N_y \mathrm{Var}(M_y) + N_y [E(M_y)]^2$.

Now we are ready to consider the implications of random data-splitting, under which $E(M_x) = E(M_y) = M$. Substitute $M$ into Equation 6 and note that $-M^2 N + M^2 N_x + M^2 N_y = 0$. Therefore, Equation 6 can be simplified to:

(7) $E(SSQ - SSQ_x - SSQ_y) = N_x \mathrm{Var}(M_x) + N_y \mathrm{Var}(M_y)$.

Next, we rely on the expressions for the sampling variances of the group means when the groups are drawn without replacement from a finite population, for example, $\mathrm{Var}(M_x) = \sigma^2 (N - N_x)/[N_x (N - 1)]$. Substitute this and the analogous expression for $\mathrm{Var}(M_y)$ into Equation 7 to yield:

(8) $E(SSQ - SSQ_x - SSQ_y) = \sigma^2 (N - N_x)/(N - 1) + \sigma^2 (N - N_y)/(N - 1) = \sigma^2 N/(N - 1)$.
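The finite-population formula for the variance of a group mean is what drives Equation 8, and it can be checked by brute force. A sketch, with an arbitrary simulated cost population:

```python
import numpy as np

rng = np.random.default_rng(1)
pop = rng.gamma(shape=2.0, scale=500.0, size=30)  # arbitrary skewed "cost" data
N, Nx = len(pop), 10
sigma2 = np.mean((pop - pop.mean()) ** 2)         # population variance, SSQ/N

# Empirical variance of the mean of a size-Nx group drawn without replacement
means = [rng.choice(pop, size=Nx, replace=False).mean() for _ in range(50_000)]
empirical = np.var(means)
theory = sigma2 * (N - Nx) / (Nx * (N - 1))       # finite-population formula
print(np.isclose(empirical, theory, rtol=0.05))   # -> True
```

The factor $(N - N_x)/(N - 1)$ is the finite-population correction; it vanishes as $N_x$ approaches $N$, which is why splitting exhausts less "free" variance in large populations.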

Finally, we can bring the denominator of Equation 3 into play. Since $SSQ = \sigma^2 N$, E(RIV) can be written as a ratio: $E(RIV) = 1/(N - 1)$. Equation 8 can be generalized to any number of groups. For example, suppose that the population is split into $K$ groups. The general formula is

(9) $E\bigl(SSQ - \sum_{k=1}^{K} SSQ_k\bigr) = \sum_{k=1}^{K} \sigma^2 (N - N_k)/(N - 1) = \sigma^2 N(K - 1)/(N - 1)$.

As a ratio, the generalized E(RIV) is equal to (K - 1)/(N - 1). This is the formula that governs Averill's simulation. With 25 observations randomly split into four groups, E(RIV) is 3/24 = .125. If 100 observations were randomly split into four groups, E(RIV) = 3/99 = .0303. It is interesting that Averill's simulated RIVs, with mean values of .132 and .031 in ten trials, so closely approximate the theoretical expected values.
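Averill's simulation is simple to reproduce. The sketch below averages the RIV over repeated random splits; the population itself is arbitrary, since the expectation $(K - 1)/(N - 1)$ does not depend on it:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_random_riv(n, k, trials=5_000):
    """Average RIV from randomly splitting n observations into k groups."""
    costs = rng.normal(size=n)          # any fixed population gives the same E(RIV)
    ssq = np.sum((costs - costs.mean()) ** 2)
    total = 0.0
    for _ in range(trials):
        groups = np.array_split(rng.permutation(costs), k)
        within = sum(np.sum((g - g.mean()) ** 2) for g in groups)
        total += (ssq - within) / ssq
    return total / trials

print(round(mean_random_riv(25, 4), 3))   # theory: 3/24  = 0.125
print(round(mean_random_riv(100, 4), 3))  # theory: 3/99 ~= 0.030
```

Note that the expectation holds for any group sizes, since the sum of $(N - N_k)$ over the $K$ groups is $N(K - 1)$ regardless of how the observations are allocated.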


Averill argued that the F-statistic offers a more familiar method than simulations for demonstrating the significance of data-splitting. It is, nevertheless, of interest to understand the relation between E(RIV) and the F-statistic. The F-statistic is often represented in terms of sums of squares "between" and "within" groups:

(10) $F = \dfrac{SSQ_{between}/(K - 1)}{SSQ_{within}/(N - K)}$.

We can also look at RIV (in ratio form) as the sum of squares between groups divided by the total SSQ. Substitute RIV into Equation 10, and also use the identity that total SSQ = SSQ within groups + SSQ between groups to obtain:

(11) $F = \dfrac{RIV/(K - 1)}{(1 - RIV)/(N - K)}$.

Finally, take the expected value of Equation 11 under the assumption of random data-splitting, using the result (from above) that E(RIV) = (K - 1)/(N - 1). The expected F-statistic, under this assumption, is 1. Therefore, the F-statistic adjusts for the expected reduction in variance under random sampling. Averill et al. are correct in asserting that significant F-statistics demonstrate that the reported results did not occur by chance at conventional significance levels.
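The adjustment is transparent in code. Using the relation $F = [RIV/(K - 1)]/[(1 - RIV)/(N - K)]$ from Equation 11, plugging in the chance-level $E(RIV) = (K - 1)/(N - 1)$ returns an F of exactly 1 (a small sketch; the sample sizes are illustrative):

```python
def f_from_riv(riv, n, k):
    """Equation 11: F = (RIV / (K - 1)) / ((1 - RIV) / (N - K))."""
    return (riv / (k - 1)) / ((1 - riv) / (n - k))

# At the chance-level RIV of (K - 1)/(N - 1), the F-statistic is 1:
n, k = 25, 4
print(f_from_riv((k - 1) / (n - 1), n, k))  # -> 1.0
```

Any RIV above the chance level pushes F above 1, which is exactly the sense in which the F-test "prices in" the variance reduction that random splitting produces for free.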


I have shown that the expected reduction in variance is positive, even when a population is randomly split into several groups. I derived a formula for E(RIV) and showed how this was related to the conventional F-statistic. The following advice is relevant for those who want to split data in order to reduce variance:

* Splitting a small population into many groups is the most likely recipe for obtaining large reported RIVs by chance. On the other hand, splitting a large population into a few groups will not lead to large RIVs by chance.

* Reporting RIVs as "large" or "small" is an incomplete and potentially misleading description of the findings.

* The conventional F-statistic is a test for the significance of reported RIVs.

Averill et al. split populations that were typically large into a few groups (four), and reported the F-statistics for their results. Their work was acceptable for publication.

In closing, I wish to highlight some remaining differences between the authors and me regarding the purpose of the prospective payment system (PPS).

The virtue of PPS is that it uncouples prices from the costs of individual hospitals. The idea, which lies at the heart of PPS, is that hospitals will strive to reduce their costs below the level of these fixed prices. The result will be greater efficiency in the use of scarce resources paid for by patients and Medicare. Proposals to adjust PPS payments for severity differences should be judged by their effect on efficiency and, through this effect, by how they affect patients and the Medicare program.


Averill, R. F., et al. "A Study of the Relationship between Severity of Illness and Hospital Cost in New Jersey Hospitals." Health Services Research, this issue, p. 609.

Roger Feldman, Ph.D. is Professor of Health Services Research and Economics, Institute for Health Services Research, University of Minnesota, 420 Delaware Street, S.E., Box 729, Minneapolis, MN 55455.
COPYRIGHT 1992 Health Research and Educational Trust
