A note on some aspects of Pitman nearness.

ABSTRACT. -- Peddada (1985, 1986) and Berry (1986) have given sufficient conditions for an estimator that has smaller mean square error or smaller mean absolute error than a competing estimator to be Pitman nearer. This note corrects a technical error and improves Peddada's and Berry's results through the use of the Cantelli-Fréchet-Uspensky inequality. Finally, we note some variations on the definition of Pitman nearness that have appeared in the literature and the consequences of the differences in these definitions. Key words: mean square error; mean absolute error; Cantelli-Fréchet-Uspensky inequality; Gauss inequalities; unimodality.

**********

Peddada (1985) discussed the relationship among minimum mean square error, minimum mean absolute error, and Pitman nearness. For a given loss function $L(T,\theta)$, he defined an estimator $T_1$ to be closer to $\theta$ than an estimator $T_2$ in the Pitman nearer (PN) sense if $P[L(T_1,\theta) < L(T_2,\theta)] > 1/2$. In his Theorem 2.2, Peddada provided sufficient conditions on the difference $U = L(T_1,\theta) - L(T_2,\theta)$ to imply that $T_1$ is closer to $\theta$ than $T_2$ in the PN sense. These included the following moment conditions: $E(U) = u_0 < -2.67$ and $E[(U - u_0)^j] < j!$ for $j = 1, 2, \ldots$.

Berry (1986) used the Bienaymé-Chebyshev inequality to prove that weaker conditions on the moments suffice in Peddada's Theorem 2.2. Specifically, Berry's conditions were that $u_0 < -2.67$ and $\sigma^2 = \mathrm{Var}(U) < \tfrac{1}{2}(2.67)^2$.

We point out a slight technical flaw in the arguments of both Berry and Peddada and provide a new, less restrictive sufficient condition for the PN criterion to hold, based on the Cantelli-Fréchet-Uspensky inequality. We also discuss a potential further improvement via the assumption of unimodality and a Gauss-type inequality. Finally, we consider variations on the definition of PN that have appeared in the literature and some consequences of these variations.

A NEW SUFFICIENT CONDITION FOR PITMAN NEARNESS

Berry's proof relied on the following implication: $P(|U - u_0| < \sqrt{2}\,\sigma) \ge 1/2 \Rightarrow P(U < u_0 + \sqrt{2}\,\sigma) > 1/2$. This implication need not hold. For example, suppose $U - u_0$ has an absolutely continuous distribution function with support $[-\sqrt{2}\,\sigma, +\infty)$ and that $P(|U - u_0| < \sqrt{2}\,\sigma) = 1/2$. Then, since $P(U - u_0 \le -\sqrt{2}\,\sigma) = 0$, we must have $P(U - u_0 < \sqrt{2}\,\sigma) = 1/2$ as well, and the strict conclusion of the implication above fails. Of course, this problem can be alleviated by placing conditions on the distribution of $U$ or by modifying the PN criterion to require only $P(U < 0) \ge 1/2$. This version of the PN criterion may be found, for example, in Mood et al. (1974), where $L$ is taken to be absolute error loss. Indeed, in Peddada and Khattree (1986), the criterion is taken to be $P(U \le 0) \ge 1/2$. We discuss variations of the definition of PN in a later section.

Peddada (1986) established an even weaker condition on the moments of $U$, showing that they need only satisfy $u_0/\sigma < -\sqrt{2}$. However, again, we must either assume that $P(U - u_0 \le -\sqrt{2}\,\sigma) > 0$ or suitably modify the PN criterion.

We now establish a sufficient condition on the moments in Peddada's Theorem 2.2 that is weaker than those presented by Berry and Peddada. We shall use the PN criterion in the sense of Mood et al. (1974) mentioned above, but under general loss. The condition follows from the Cantelli-Fréchet-Uspensky (CFU) inequality (Fréchet, 1950:137; Uspensky, 1937:198). For a random variable $X$ with mean $\mu$ and variance $\tau^2$, and a constant $k > 0$, the CFU inequality is $P(X - \mu \ge k\tau) \le 1/(1 + k^2)$.
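
As an aside not in the original note, the CFU bound is easy to check by simulation. The following Python sketch (the exponential distribution and the sample size are arbitrary choices of ours) compares the empirical upper tail with $1/(1 + k^2)$ for a few values of $k$:

import numpy as np

# Monte Carlo check of the CFU inequality P(X - mu >= k*tau) <= 1/(1 + k^2).
rng = np.random.default_rng(1990)
x = rng.exponential(scale=1.0, size=1_000_000)  # mean 1, standard deviation 1
mu, tau = x.mean(), x.std()

for k in (0.5, 1.0, 2.0):
    empirical = np.mean(x - mu >= k * tau)
    bound = 1.0 / (1.0 + k ** 2)
    print(f"k = {k}: empirical tail {empirical:.4f} <= bound {bound:.4f}")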

Theorem. -- Let $T_1$ and $T_2$ be estimators of $\theta$. Let $L$ be a loss function, $U = L(T_1,\theta) - L(T_2,\theta)$, $E(U) = u_0$, and $\mathrm{Var}(U) = \sigma^2$. If $u_0/\sigma < -1$, then $T_1$ is closer to $\theta$ than $T_2$ in the PN sense.

Proof: Taking $k = 1$ in the CFU inequality gives $P(U \ge u_0 + \sigma) \le 1/2$, so that $P(U < u_0 + \sigma) \ge 1/2$. If $u_0/\sigma < -1$, then $u_0 + \sigma < 0$, so $P(U < 0) \ge P(U < u_0 + \sigma) \ge 1/2$ and the theorem is proved.

Note that the condition $u_0/\sigma < -1$ is a considerable relaxation of the condition $u_0/\sigma < -\sqrt{2}$, which was the best previous sufficient condition. Lee (1990) obtained a similar result.
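
As an illustration of the theorem (our own, with arbitrary distributional choices), the sketch below standardizes three candidate distributions for $U$ so that $u_0/\sigma = -1.05 < -1$ and confirms that the estimated $P(U < 0)$ exceeds $1/2$ in each case:

import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
u0 = -1.05  # mean of U; with sigma = 1 this gives u0/sigma < -1

samples = {
    "normal": rng.normal(u0, 1.0, n),
    "shifted exponential": u0 + rng.exponential(1.0, n) - 1.0,  # mean u0, sd 1
    "uniform": rng.uniform(u0 - 3 ** 0.5, u0 + 3 ** 0.5, n),    # mean u0, sd 1
}
for name, u in samples.items():
    print(f"{name}: P(U < 0) ~ {np.mean(u < 0):.4f}  (theorem guarantees >= 0.5)")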

A SUFFICIENT CONDITION FOR PITMAN NEARNESS BASED ON UNIMODALITY

Berry's improvement on Peddada's sufficient condition was derived using Chebyshev's inequality. Our improvement on Berry's result has been obtained via a tighter probability inequality. We now consider what improvement may be had by assuming that $U$ is unimodal and employing a version of Gauss's inequality.

Under the assumption of unimodality, Chebyshev's inequality may be sharpened (though not uniformly). Probability inequalities that incorporate the assumption of unimodality are known as Gauss inequalities. We shall use the following Gauss inequality, formulated by Vysochanskii and Petunin (1979). Let $X$ be a unimodal random variable with mean $\mu$ and variance $\tau^2$. Then for all $k > 0$,

$$P(|X - \mu| > k\tau) \le \max\left\{ \frac{4 - k^2}{3k^2},\ \frac{4}{9k^2} \right\}.$$

Now let $U$ be defined as before, with mean $u_0$ and variance $\sigma^2$, and suppose that it is unimodal. Apply the inequality above to $U$ with $k = d_0 > 0$. In order that $P(U \le u_0 + d_0\,\sigma) \ge 1/2$, we must have $d_0$ such that

$$\max\left\{ \frac{4 - d_0^2}{3d_0^2},\ \frac{4}{9d_0^2} \right\} \le \frac{1}{2}.$$

The first term in the maximum is the binding one here, and $(4 - d_0^2)/(3d_0^2) \le 1/2$ is equivalent to $d_0^2 \ge 8/5$. Unfortunately, then, this inequality implies $d_0 \ge (8/5)^{1/2} > 1$. Thus, we obtain no improvement over the result obtained using the CFU inequality.
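
This algebra is easy to confirm numerically. The short Python sketch below (our addition) scans a grid of $d_0$ values for the smallest one satisfying the displayed inequality:

import numpy as np

# Smallest d0 with max{(4 - d0^2)/(3 d0^2), 4/(9 d0^2)} <= 1/2.
d = np.linspace(0.5, 2.0, 150001)
bound = np.maximum((4 - d ** 2) / (3 * d ** 2), 4 / (9 * d ** 2))
print(f"smallest feasible d0 ~ {d[bound <= 0.5][0]:.4f}")  # ~ 1.2649
print(f"sqrt(8/5)            = {np.sqrt(8 / 5):.4f}")      # 1.2649 > 1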

VARIATIONS ON THE DEFINITION OF PITMAN NEARNESS

Let $T_1$ and $T_2$ be estimators of $\theta$. Pitman's original criterion, established in his 1937 paper, is that $T_1$ is closer to $\theta$ than $T_2$ if $P(|T_1 - \theta| < |T_2 - \theta|) > 1/2$ for all $\theta$.

Peddada (1985) and Rao et al. (1986), among others, used a generalized version of the PN criterion in which a general loss function is allowed. For a loss function $L$, the generalized PN criterion is that, for all $\theta$, $P[L(T_1,\theta) < L(T_2,\theta)] > 1/2$. This generalization affords the treatment of multiparameter estimation problems.

The generalized criterion may be further modified by changing one or both of the inequalities to allow equality. The four possible variations are listed below:

PN1: $P[L(T_1,\theta) < L(T_2,\theta)] > 1/2$

PN2: $P[L(T_1,\theta) \le L(T_2,\theta)] > 1/2$

PN3: $P[L(T_1,\theta) \le L(T_2,\theta)] \ge 1/2$

PN4: $P[L(T_1,\theta) < L(T_2,\theta)] \ge 1/2$.
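
To make the four variants concrete, the following small Python helper (our construction; the name pn_versions is hypothetical) estimates the two probabilities involved from paired loss samples and reports which versions would declare $T_1$ nearer:

import numpy as np

def pn_versions(loss1, loss2):
    """Evaluate the four PN variants from paired samples of L(T1, theta) and L(T2, theta)."""
    p_strict = np.mean(loss1 < loss2)   # P[L(T1, theta) <  L(T2, theta)]
    p_weak = np.mean(loss1 <= loss2)    # P[L(T1, theta) <= L(T2, theta)]
    return {"PN1": p_strict > 0.5, "PN2": p_weak > 0.5,
            "PN3": p_weak >= 0.5, "PN4": p_strict >= 0.5}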

A cursory search of the literature reveals that PN1, PN3, and PN4 have been employed. In some cases, the same author used different versions in different papers. For example, Peddada and Khattree (1986) used PN3, whereas Khattree (1987) used PN4. Mood et al. (1974) used PN4 in their widely read text on mathematical statistics.

Our purpose in this section is to note that different results may be obtained using different versions of the PN criterion. We illustrate this fact with an example.

Let $X$ be a unimodal random variable with support $[a, b]$ and variance $\sigma^2$. Upper bounds on the variance of such random variables have been established by several authors; see Seaman and Odell (1988) for an overview. Seaman et al. (1987) considered the use of such bounds in small-sample variance estimation. For example, when sampling from a distribution that is known to be symmetric unimodal, it can be shown that $\sigma^2 \le (b - a)^2/12$. The U-statistic for estimating variance in this case is the usual unbiased sample variance, $S^2$. By truncating $S^2$ at the upper bound, the mean square error (MSE) may be reduced for small ($n < 10$) sample sizes. If the criterion is minimum MSE, then the truncated estimator is superior. One may ask whether the truncated estimator, call it $S_T^2$, is Pitman nearer to $\sigma^2$ than $S^2$. The answer depends on which version of the PN criterion is employed.
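
A minimal sketch of the two estimators just described, assuming the support endpoints $a$ and $b$ are known (the function names are ours):

import numpy as np

def s2_unbiased(x):
    """Usual unbiased sample variance S^2."""
    return np.var(x, ddof=1)

def s2_truncated(x, a, b):
    """S^2 truncated at the symmetric-unimodal variance bound (b - a)^2 / 12."""
    return min(s2_unbiased(x), (b - a) ** 2 / 12.0)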

Consider the following illustration. We simulated 1000 samples of size 4 from a beta distribution with shape parameters three and three. This distribution is symmetric unimodal and therefore has variance not exceeding $1/12$. We find that the ratio of the MSE of $S_T^2$ to that of $S^2$ is approximately .910. However, using PN1 or PN4, we have $P(|S_T^2 - \sigma^2| < |S^2 - \sigma^2|) \approx .047$, so that $S^2$ is Pitman nearer to $\sigma^2$ than $S_T^2$. By contrast, using PN2 or PN3, we have $P(|S_T^2 - \sigma^2| \le |S^2 - \sigma^2|) = 1$, so that one concludes that $S_T^2$ is Pitman nearer to $\sigma^2$ than $S^2$. The two versions of the PN criterion therefore lead to opposite conclusions.
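
The illustration can be re-created along the following lines. This is a sketch; the exact figures (.910 and .047 above) depend on the particular simulated samples and will vary with the seed:

import numpy as np

rng = np.random.default_rng(1990)
sigma2 = 1.0 / 28.0   # variance of a beta(3, 3) random variable
bound = 1.0 / 12.0    # (b - a)^2 / 12 with support [0, 1]

s2 = np.array([np.var(rng.beta(3, 3, 4), ddof=1) for _ in range(1000)])
s2_t = np.minimum(s2, bound)  # truncated estimator

mse_ratio = np.mean((s2_t - sigma2) ** 2) / np.mean((s2 - sigma2) ** 2)
print(f"MSE ratio (truncated/unbiased): {mse_ratio:.3f}")
print(f"P(strictly nearer): {np.mean(np.abs(s2_t - sigma2) < np.abs(s2 - sigma2)):.3f}")
print(f"P(nearer or tied):  {np.mean(np.abs(s2_t - sigma2) <= np.abs(s2 - sigma2)):.3f}")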

LITERATURE CITED

Berry, J. C. 1986. On Pitman nearness. Amer. Stat., 40:257.

Fréchet, M. 1950. Généralités sur les probabilités. Variables aléatoires. Gauthier-Villars, Paris.

Khattree, R. 1987. On comparison of estimates of dispersion using generalized Pitman nearness criterion. Commun. Statist.-Theor. Meth., 16:263-274.

Lee, C. 1990. On the characterization of Pitman nearness. Stat. Prob. Letters, in press.

Mood, A. M., F. A. Graybill, and D. C. Boes. 1974. Introduction to the theory of statistics. McGraw-Hill, New York (3rd ed.), xvi + 564 pp.

Peddada, S. D. 1985. A short note on Pitman's measure of nearness. Amer. Stat., 39:298-299.

_____. 1986. Reply to Berry's letter to the editor. Amer. Stat., 40:257.

Peddada, S. D., and R. Khattree. 1986. On Pitman nearness and variance of estimators. Commun. Statist.-Theor. Meth., 15:3005-3017.

Pitman, E. J. C. 1937. The closest estimates of statistical parameters. Proc. Cambridge Phil. Soc., 33:212-222.

Rao, C. R., J. P. Keating, and R. L. Mason. 1986. The Pitman nearness criterion and its determination. Commun. Statist.-Theor. Meth., 15:3173-3191.

Seaman, J. W., and P. L. Odell. 1988. Variance upper bounds. Pp. 480-484, in Encyclopedia of statistical sciences (S. Kotz and N. L. Johnson, eds.), John Wiley and Sons, New York, 9:xxi + 1-762.

Seaman, J. W., P. L. Odell, and D. M. Young. 1987. Improving small sample variance estimators for bounded random variables. Industrial Math., 37:65-75.

Uspensky, J. V. 1937. Introduction to mathematical probability. McGraw-Hill, New York, ix + 411 pp.

Vysochanskii, D. F., and Yu. I. Petunin. 1979. On a Gauss inequality for unimodal distributions. Theor. Probab. Appl., 27:359-361.

JOHN W. SEAMAN, JR., AND DEAN M. YOUNG

Department of Information Systems, Baylor University, Waco, Texas 76798-8005