# A comparison of six test statistics for detecting multivariate non-normality which utilize the multivariate squared-radii statistic.

Abstract. -- This study presents tabulated empirically-derived critical values for Hawkins' test for non-normality, and compares the power of this test to five other test statistics designed to detect multivariate non-normality, all of which are functions of the multivariate squared-radii statistic. The power comparison has been accomplished using a Monte Carlo simulation with two sample sizes, two observation dimensions, and ten multivariate non-normal distributions. Among the six test statistics considered in the present study, the one proposed by Hawkins (1981) has proven to be the best omnibus test statistic for detecting multivariate non-normality. Empirically calculated critical values for Hawkins' test statistic are given in an appendix.

**********

Strategies for testing the hypothesis of multivariate normality of a population from a set of sampled multivariate observations are numerous in the statistical literature. To date, over forty different test statistics have been recommended for this purpose. The interested reader is referred to thorough reviews by Gnanadesikan (1977), Mardia (1980), Koziol (1986), and Looney (1986). Attempts at detecting deviations from multivariate normality, using sample evidence from a set of multivariate observations, have typically employed one of the following strategies: (1) apply univariate techniques to detect marginal univariate non-normality for each dimension, (2) utilize multivariate techniques to detect joint non-normality, or (3) employ a univariate summary statistic to test for multivariate non-normality. Given a set of p-dimensional random variables [X.sub.1], [X.sub.2],..., [X.sub.n], the statistic most often utilized in testing for multivariate normality after the manner of strategy (3) is the squared sample radii statistic defined by

$$D_i = (X_i - \bar{X})' S^{-1} (X_i - \bar{X}) \qquad (1)$$

where

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i \qquad \text{and} \qquad S = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})(X_i - \bar{X})'.$$

Several of the techniques designed to test the multivariate normality hypothesis have employed some variation of the multivariate squared-radii statistic defined in (1). For example, Healy (1968) suggested that the tables in Wilk et al. (1962) be used to construct a [chi square] plot so that the multivariate normality hypothesis can be tested visually when p = 2. Malkovich & Afifi (1973) proposed applying the Cramer-Von Mises and the Kolmogorov-Smirnov statistics to test the hypothesis that the values of [D.sub.i] have an approximate [chi square](p) distribution. For cases where p [greater than or equal to] 2, Small (1978) proposed plotting the order statistics of [D.sub.i] against the expected order statistics from a multiple of a beta distribution since, under the hypothesis of multivariate normality, the marginal distribution of [D.sub.i] is proportional to

$$\frac{(n-1)^2}{n}\,\mathrm{Beta}\!\left(\frac{p}{2},\ \frac{n-p-1}{2}\right),$$

where Beta (a, b) denotes a beta distribution with shape parameters a and b.
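As a concrete illustration, the squared radii in (1) are straightforward to compute. The sketch below (Python with NumPy, our choice of language; the helper name `squared_radii` is hypothetical) also checks the algebraic identity [summation][D.sub.i] = p(n - 1), which follows from tr{[S.sup.-1](n - 1)S} = p(n - 1).

```python
import numpy as np

def squared_radii(X):
    """Squared sample radii D_i = (X_i - Xbar)' S^{-1} (X_i - Xbar),
    with S the unbiased sample covariance (divisor n - 1), as in (1)."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))  # np.cov uses divisor n - 1
    # Quadratic form for each row: diff_i' S_inv diff_i
    return np.einsum('ij,jk,ik->i', diff, S_inv, diff)

rng = np.random.default_rng(0)
n, p = 50, 2
D = squared_radii(rng.standard_normal((n, p)))
# For any data set, sum(D) equals p(n - 1) identically.
```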

Another test procedure utilizing the squared-radii statistic has been formulated by Hawkins (1981) for simultaneously testing the assumption of multivariate normality of two or more sets of multivariate observations. He has proposed transforming the squared radii statistics into statistics with approximate F-distributions, assuming that multivariate normality of all data sets holds. He has shown that, under the assumption of multivariate normality, the tail probabilities will be distributed uniformly on the open unit interval. Hawkins suggested using the Anderson-Darling test statistic to test the assumption of uniformity for the transformed tail probabilities. Moore & Stubblebine (1981) proposed a multivariate normality test statistic which also is based upon the squared-radii statistic. The test statistic is of the form

$$\chi^2 = \frac{1}{nk}\sum_{j=1}^{k}\left(kO_j - n\right)^2$$

which has an approximate [chi square](q) distribution, k - 1 [less than or equal to] q [less than or equal to] k, where [O.sub.j] is the number of the [D.sub.i], i = 1,2,...,n, whose values fall in cell j, j = 1,2,...,k. One advantage of this statistic is that approximate critical values are easily obtained.

Fattorini (1982) proposed two statistics based on [D.sub.i] that may be used to test for multivariate non-normality. The first statistic is the average relative discrepancy among the sample order statistics and the expected order statistics from a multiple of the beta distribution. The second statistic utilizes the Theil index to measure the goodness of fit between the sample order statistics and the expected order statistics from the beta distribution.

Koziol (1982) derived the asymptotic distribution of the Cramer-Von Mises type test of Malkovich and Afifi and also derived critical values via a Monte Carlo simulation for various sample sizes, dimensions, and significance levels. Only a limited selection of these critical values, however, is reported in the paper.

Royston (1983) formulated a test for multivariate normality based on the squared-radii statistic in which, assuming the hypothesis of multivariate normality, the squared-radii are transformed to near normality and then summed to form an approximate [chi square] random variable.

Booker et al. (1984) noted that the [chi square](p) reference distribution used by Malkovich & Afifi (1973) in their Kolmogorov-Smirnov type test for multivariate normality could be improved by applying a multiple of the beta distribution as the reference distribution. However, the power of this test was examined for only the limited case of p = 2.

Paulson et al. (1987) proposed two tests for multivariate normality utilizing the squared-radii statistic. They find empirical critical values for dimensions one through five for an Anderson-Darling type statistic and they also formulate a test based on the Kullback divergence statistic.

Tsai & Koziol (1988) suggested using the Pearson correlation coefficient as a measure of the strength of the relationship between the order statistic for the squared-radii, [D.sub.i], and the approximate expected order statistics of the [D.sub.i] when assuming multivariate normality of the underlying population.

This paper compares the relative powers of six test statistics for detecting multivariate non-normality, all of which are functions of the squared-radii statistic. The power comparison is accomplished using a Monte Carlo simulation encompassing a variety of multivariate non-normal distributions. Additionally, a table of empirically-derived critical values for Hawkins' test statistic is constructed; the tabled critical values appear in the appendix. The next section describes the simulation used to generate those critical values and briefly discusses each of the six test statistics, all based on the squared-radii statistic, that have been compared in this study. Section 3 then describes the Monte Carlo simulation used for the power comparison and presents its results. Finally, Section 4 comments on the simulation results and makes recommendations regarding the choice of a test statistic.

Six Test Statistics for Detection of Multivariate Non-Normality Based upon the Squared-Radii Statistic

The goal of this study is the comparison of the powers of six test statistics designed to detect multivariate non-normality. Each of these statistics is a function of the squared-radii statistic defined in (1). For completeness, a brief description of each of these test statistics is presented.

Hawkins Test Statistic (HAW)

Hawkins (1981) proposed a statistic for detecting multivariate non-normality which is a function of the squared-radii statistic, [D.sub.i]. His procedure, which may be applied to observations from one or more populations simultaneously, is based upon a transformation of the squared-radii into statistics which have exact F-distributions under the assumption that the underlying populations are multivariate normal. If this assumption is true, the tail probabilities of the proposed statistic are distributed uniformly on the interval (0,1). The Anderson-Darling methodology is then employed to assess the uniformity of the tail probabilities. Hawkins' test statistic for detecting multivariate non-normality of a single population may be described as follows. Let [D.sub.i] be defined as in (1) and let

$$F_i = \frac{(n-p-1)\,nD_i}{p\left[(n-1)^2 - nD_i\right]}.$$

Let [A.sub.i] [equivalent to] P[F > [F.sub.i]] denote the tail area of a random variable with an F-distribution having p and (n - p - 1) degrees of freedom. Hawkins' test statistic for detecting multivariate non-normality is based on the n order statistics [A.sub.(1)] [less than or equal to] [A.sub.(2)] [less than or equal to] ... [less than or equal to] [A.sub.(n)] of the [A.sub.i]'s and may be written as

$$\mathrm{HAW} = -n - \frac{1}{n}\sum_{j=1}^{n}(2j-1)\left[\log A_{(j)} + \log\!\left(1 - A_{(n-j+1)}\right)\right].$$

Large values of HAW indicate a departure from the multivariate normal model. Note that this is nothing more than an application of the Anderson-Darling statistic to test uniformity of the [A.sub.i] values. Empirical critical values for Hawkins' test statistic have been obtained via a Monte Carlo simulation that is described in the next section.
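A minimal computational sketch of this procedure follows (Python with NumPy and SciPy, our choice of language; the function name `hawkins_haw` and the use of `scipy.stats.f.sf` for the tail areas are our assumptions).

```python
import numpy as np
from scipy import stats

def hawkins_haw(X):
    """Sketch of Hawkins' single-sample statistic: squared radii are
    transformed to F variates, their upper-tail areas A_i are computed,
    and the Anderson-Darling formula is applied to the ordered A_(j)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    diff = X - X.mean(axis=0)
    D = np.einsum('ij,jk,ik->i', diff,
                  np.linalg.inv(np.cov(X, rowvar=False)), diff)
    F = (n - p - 1) * n * D / (p * ((n - 1) ** 2 - n * D))
    A = np.sort(stats.f.sf(F, p, n - p - 1))   # ordered tail areas A_(j)
    j = np.arange(1, n + 1)
    # A[::-1][j-1] is A_(n-j+1), matching the Anderson-Darling formula
    return -n - np.mean((2 * j - 1) * (np.log(A) + np.log(1 - A[::-1])))

rng = np.random.default_rng(2)
haw_value = hawkins_haw(rng.standard_normal((50, 3)))
```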

Empirically-derived Critical Values for Hawkins' Test Statistic

For each combination of sample size n = 10, 20, 30, 40, 50, 75, 100 and dimension p = 2, 3, 4, 5, 6, 8, 10, 12, and 15, four sets of 5,000 sample observations of the statistic HAW have been generated from the p-dimensional standard normal distribution (i.e. a p-variate normal distribution with mean vector 0 and covariance matrix [SIGMA] = I). Notice that it is sufficient to use the p-variate standard normal since the [D.sub.i] are invariant under nonsingular affine transformations of the data. For each combination of n and p, each set of 5,000 observations has been ordered and the appropriate sample quantile selected to estimate the critical value. The critical values tabulated in the appendix are the averages of the four sample quantiles for significance levels of .10, .05, .025, .001 and .0005.

Computations have been performed on an IBM 4381 computer under the VM/CMS operating system in the Casey Computer Center at Baylor University. The code has been written in the SAS/IML software.

One further observation about the empirically generated critical values is worth mentioning here. A comparison of the asymptotic, Anderson-Darling critical values (recommended by Hawkins) and the empirically-derived critical values found in the appendix suggests that the asymptotic critical values may be quite conservative when applied to Hawkins' test statistic. Consider, for example, the case where n = 40, p = 5, and a significance level of .10 is adopted. Under these experimental conditions, the asymptotic critical value, which is independent of p, is 1.933. A glance at the tabled empirical values in the appendix, however, reveals that the p-value corresponding to 1.933 is actually less than 0.005.
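The tabulation just described can be sketched in miniature as follows (our implementation, with far fewer replications than the four sets of 5,000 used for the appendix). Even a modest simulation places the empirical upper-.10 point for n = 40, p = 5 well below the asymptotic value 1.933.

```python
import numpy as np
from scipy import stats

def haw(X):
    # Hawkins-type statistic (sketch): F-transform of the squared radii,
    # then the Anderson-Darling formula on the ordered tail areas.
    n, p = X.shape
    d = X - X.mean(axis=0)
    D = np.einsum('ij,jk,ik->i', d,
                  np.linalg.inv(np.cov(X, rowvar=False)), d)
    F = (n - p - 1) * n * D / (p * ((n - 1) ** 2 - n * D))
    A = np.sort(stats.f.sf(F, p, n - p - 1))
    j = np.arange(1, n + 1)
    return -n - np.mean((2 * j - 1) * (np.log(A) + np.log(1 - A[::-1])))

rng = np.random.default_rng(3)
n, p, reps = 40, 5, 500                  # the appendix used 4 x 5,000 draws
null_draws = np.array([haw(rng.standard_normal((n, p))) for _ in range(reps)])
cv_10 = np.quantile(null_draws, 0.90)    # estimated upper .10 critical value
```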

The Paulson-Roohan-Sullo Test Statistic (PRS)

The PRS test statistic was formulated by Paulson et al. (1987). This test statistic for detecting multivariate non-normality, like Hawkins' statistic, is based on the Anderson-Darling formulation. The PRS statistic may be expressed as

$$\mathrm{PRS} = -n - \frac{1}{n}\sum_{j=1}^{n}(2j-1)\left[\log G(D_{(j)}) + \log\!\left(1 - G(D_{(n-j+1)})\right)\right]$$

where G(*) is the cumulative distribution function of a [chi square](p) random variable and [D.sub.(j)] is the jth order statistic of the squared-radii statistic defined in (1). Note that Hawkins' test procedure differs from the PRS test procedure in that the latter statistic adopts a [chi square]-approximation for the [D.sub.i]'s while the former utilizes a transformation of the [D.sub.i]'s, resulting in tail probabilities with exact uniform distributions. Empirical critical values for the PRS statistic can be found in Paulson et al. (1987).
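Under the same conventions as above (Python with SciPy, our sketch; `prs` is a hypothetical name), the PRS statistic may be computed as:

```python
import numpy as np
from scipy import stats

def prs(X):
    """Sketch of the PRS statistic: the Anderson-Darling formula applied
    to the chi-square(p) probability transform of the squared radii."""
    n, p = X.shape
    d = X - X.mean(axis=0)
    D = np.einsum('ij,jk,ik->i', d,
                  np.linalg.inv(np.cov(X, rowvar=False)), d)
    U = np.sort(stats.chi2.cdf(D, df=p))   # G(D_(j)), ordered
    j = np.arange(1, n + 1)
    return -n - np.mean((2 * j - 1) * (np.log(U) + np.log(1 - U[::-1])))

rng = np.random.default_rng(4)
prs_value = prs(rng.standard_normal((50, 3)))
```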

The Tsai-Koziol Test Statistic (TK)

The TK test statistic may be described as follows. Let [Q.sub.1] [less than or equal to] [Q.sub.2] [less than or equal to] ... [less than or equal to] [Q.sub.n] denote the expected order statistics in a sample of size n from a [chi square]-distribution with p degrees of freedom. The Tsai-Koziol statistic, then, is of the form

$$\mathrm{TK} = \frac{\displaystyle\sum_{i=1}^{n}(D_{(i)} - \bar{D})(Q_i - \bar{Q})}{\left[\displaystyle\sum_{i=1}^{n}(D_{(i)} - \bar{D})^2\right]^{1/2}\left[\displaystyle\sum_{i=1}^{n}(Q_i - \bar{Q})^2\right]^{1/2}}$$

where $\bar{D} = \frac{1}{n}\sum_{i=1}^{n} D_i$ and $\bar{Q} = \frac{1}{n}\sum_{i=1}^{n} Q_i$. Note that this statistic is nothing more than the Pearson estimate of the correlation between the expected and empirical order statistics for the multivariate squared-radii statistic. The null hypothesis of multivariate normality is rejected for sufficiently small TK values. A selected group of sample critical values may be found in Tsai & Koziol (1988).
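A sketch of the TK computation follows (Python with SciPy, our choice). Note that we substitute the common plotting positions (i - 0.5)/n into the [chi square](p) quantile function as an approximation to the expected order statistics; these need not match the exact expectations used by Tsai & Koziol.

```python
import numpy as np
from scipy import stats

def tk(X):
    """Sketch of the TK statistic: Pearson correlation between the
    ordered squared radii and approximate chi-square(p) expected
    order statistics (plotting positions (i - 0.5)/n, our assumption)."""
    n, p = X.shape
    d = X - X.mean(axis=0)
    D = np.sort(np.einsum('ij,jk,ik->i', d,
                          np.linalg.inv(np.cov(X, rowvar=False)), d))
    Q = stats.chi2.ppf((np.arange(1, n + 1) - 0.5) / n, df=p)
    return np.corrcoef(D, Q)[0, 1]

rng = np.random.default_rng(5)
tk_value = tk(rng.standard_normal((100, 3)))  # typically near 1 for normal data
```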

The Extended Malkovich and Afifi Test Statistic (EMA)

Malkovich & Afifi (1973) proposed another test statistic for detecting multivariate non-normality which is a function of the squared-radii statistic. Their statistic is essentially an extension of the Lilliefors statistic for testing univariate normality. The Malkovich and Afifi statistic is of the form

$$\mathrm{EMA} = \sup_{z}\left|F_n(z) - G(z)\right|$$

where [F.sub.n](z) is the sample cumulative distribution function of the squared-radii statistic and G(z) is the cumulative distribution function of a [chi square](p) random variable. Concerning the performance of their test statistic, Malkovich and Afifi state "... a better approximation than [chi square](p) may be appropriate as the hypothetical distribution of [D.sub.i]...." Jennings et al. (1990) applied a multiple of a beta random variable as an approximation to the distribution of [D.sub.i]. This statistic is of the form

$$\mathrm{EMA}^{*} = \sup_{z}\left|F_n(z) - G^{*}(z)\right|$$

where G*(z) is taken to be a scaled beta distribution function. Note that this formulation extends the statistic proposed by Booker et al. (1984) to the case where the dimensionality of the observation vectors is greater than two. Empirical critical values for this test statistic have been generated by Jennings et al. (1990).
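Both variants reduce to a Kolmogorov-Smirnov-type sup distance and may be sketched as follows (Python with SciPy, our implementation; the scaled-beta reference in `ema_star` uses the (n - 1)^2/n multiple of Beta(p/2, (n - p - 1)/2) given earlier).

```python
import numpy as np
from scipy import stats

def _sup_distance(D, G):
    # KS sup distance between the empirical CDF of sorted D and reference G(D)
    n = len(D)
    i = np.arange(1, n + 1)
    return np.max(np.maximum(i / n - G, G - (i - 1) / n))

def ema(X):
    """EMA sketch: sup distance to the chi-square(p) reference CDF."""
    n, p = X.shape
    d = X - X.mean(axis=0)
    D = np.sort(np.einsum('ij,jk,ik->i', d,
                          np.linalg.inv(np.cov(X, rowvar=False)), d))
    return _sup_distance(D, stats.chi2.cdf(D, df=p))

def ema_star(X):
    """EMA* sketch: sup distance to the scaled Beta(p/2, (n-p-1)/2) reference."""
    n, p = X.shape
    d = X - X.mean(axis=0)
    D = np.sort(np.einsum('ij,jk,ik->i', d,
                          np.linalg.inv(np.cov(X, rowvar=False)), d))
    G = stats.beta.cdf(D * n / (n - 1) ** 2, p / 2, (n - p - 1) / 2)
    return _sup_distance(D, G)

rng = np.random.default_rng(6)
X = rng.standard_normal((50, 3))
ema_value, ema_star_value = ema(X), ema_star(X)
```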

The Cramer-Von Mises Test Statistic (CM)

Koziol (1982) derived a test statistic for detecting multivariate non-normality which is based on the Cramer-Von Mises distance measure between two distribution functions. This distance measure is of the form

$$\int_{0}^{\infty}\left[F(z) - G(z)\right]^{2}\,dG(z)$$

where F and G are cumulative distribution functions. The Cramer-Von Mises test statistic formulated by Koziol (1982) is expressed as

$$\mathrm{CM} = \frac{1}{12n} + \sum_{j=1}^{n}\left[G(D_{(j)}) - \frac{2j-1}{2n}\right]^{2}$$

where G is the cumulative distribution function of a [chi square] random variable with p degrees of freedom. Unfortunately, Koziol (1982) includes only a limited number of empirical critical values.
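A computational sketch of CM (Python with SciPy, our implementation; `cm` is a hypothetical name) is:

```python
import numpy as np
from scipy import stats

def cm(X):
    """Sketch of the CM statistic: the Cramer-Von Mises formula applied
    to the chi-square(p) probability transform of the squared radii."""
    n, p = X.shape
    d = X - X.mean(axis=0)
    D = np.sort(np.einsum('ij,jk,ik->i', d,
                          np.linalg.inv(np.cov(X, rowvar=False)), d))
    j = np.arange(1, n + 1)
    G = stats.chi2.cdf(D, df=p)
    return 1 / (12 * n) + np.sum((G - (2 * j - 1) / (2 * n)) ** 2)

rng = np.random.default_rng(7)
cm_value = cm(rng.standard_normal((50, 3)))
```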

The Percent Mean Difference Test Statistic (PME)

Fattorini (1982) suggested using the percent mean difference of the estimated quantiles of [D.sub.i] from the approximated expected quantiles of the squared-radii statistic assuming multivariate normality. The approximate expected quantiles of [D.sub.i] are calculated as functions of approximate beta quantiles of order

$$p_i = \frac{i - \dfrac{a-1}{2a}}{n - \dfrac{a-1}{2a} - \dfrac{b-1}{2b} + 1},$$

denoted by $q_i$, where a = p/2 and b = (n - p - 1)/2. The expected quantiles of [D.sub.i] may then be approximated by $v_i = \frac{(n-1)^2}{n}\,q_i$. The percent mean difference between the estimated quantiles, [D.sub.(i)], and the approximate expected quantiles, $v_i$, is then expressed as

$$\mathrm{PME} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|D_{(i)} - v_i\right|}{v_i}.$$

Note that large values of PME indicate evidence of multivariate non-normality. Empirically derived critical values for selected sample sizes with dimensions 2 through 6 are given in Fattorini (1982).
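The PME computation may be sketched as follows (Python with SciPy, our implementation; the name `pme` is hypothetical).

```python
import numpy as np
from scipy import stats

def pme(X):
    """Sketch of Fattorini's PME statistic: mean absolute relative
    difference between the ordered squared radii and approximate
    expected quantiles v_i from the scaled beta distribution."""
    n, p = X.shape
    a, b = p / 2, (n - p - 1) / 2
    i = np.arange(1, n + 1)
    # Plotting positions p_i and the corresponding beta quantiles q_i
    pos = (i - (a - 1) / (2 * a)) / (n - (a - 1) / (2 * a) - (b - 1) / (2 * b) + 1)
    v = (n - 1) ** 2 / n * stats.beta.ppf(pos, a, b)   # v_i = (n-1)^2/n * q_i
    d = X - X.mean(axis=0)
    D = np.sort(np.einsum('ij,jk,ik->i', d,
                          np.linalg.inv(np.cov(X, rowvar=False)), d))
    return np.mean(np.abs(D - v) / v)

rng = np.random.default_rng(8)
pme_value = pme(rng.standard_normal((50, 3)))
```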

The Simulation for Power Comparisons

To evaluate the relative powers of the six test statistics for detecting multivariate non-normality which are functions of the squared-radii statistic defined in (1), we conducted a Monte Carlo simulation using SAS/IML under the VM/CMS operating system on an IBM 4381. The simulation was performed in the following manner. Sets of ten thousand random vectors for sample sizes n = 20 and n = 50 from various nonnormal multivariate populations of dimensions p = 2 and p = 6 were generated. We evaluated each of the six test statistics using all possible configurations of sample size, dimension, and form of the nonnormal distribution.

The power study simulation made extensive use of the r-normed exponential distribution family which consists of symmetric, multivariate distributions. The reader may consult Goodman & Kotz (1973) or Chhikara & Odell (1973) for a complete discussion of this family. This study used multivariate r-normed exponential distributions with r = 1, 1.1, 1.2, 1.3, 1.4, 1.5, and 10. Other nonnormal distributions used in this study include four p-dimensional distributions with marginal [chi square] variables having 1, 2, 3, and 4 degrees of freedom; and a p-dimensional distribution with marginal uniform variates.

The non-normality of these distributions was assessed using multivariate measures of skewness and kurtosis formulated by Mardia (1970). The multivariate skewness measure is

$$\beta_{1,p} = \sum_{i,j,k=1}^{p}\ \sum_{e,f,g=1}^{p} \sigma^{ie}\sigma^{jf}\sigma^{kg}\,\mu_{111}^{(ijk)}\,\mu_{111}^{(efg)},$$

and the multivariate kurtosis measure is

$$\beta_{2,p} = \sum_{i,j=1}^{p}\ \sum_{k,l=1}^{p} \sigma^{ij}\sigma^{kl}\,\mu_{1111}^{(ijkl)},$$

where

$$\mu_{rstu}^{(ijkl)} = E\!\left[(X_i - \mu_i)^r (X_j - \mu_j)^s (X_k - \mu_k)^t (X_l - \mu_l)^u\right]$$

and $\Sigma^{-1} = (\sigma^{ij})$ for $i, j = 1,\dots,p$. Mardia (1970) showed that $\beta_{1,p} = 0$ and $\beta_{2,p} = p(p+2)$ for multivariate normal distributions. It is noted, here, that there are many types of non-normality and that Mardia's measures of multivariate skewness and kurtosis do not characterize all of them.
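For reference, the standard sample analogues of these measures, b[1,p] and b[2,p], may be sketched as follows (Python with NumPy, our implementation; the covariance here uses divisor n, as is conventional for these sample measures).

```python
import numpy as np

def mardia_sample(X):
    """Sample analogues of Mardia's multivariate skewness and kurtosis.
    With g_ij = (X_i - Xbar)' S^{-1} (X_j - Xbar):
      b1 = average of g_ij^3 over all (i, j) pairs,
      b2 = average of g_ii^2 (squared radii with divisor-n covariance)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    d = X - X.mean(axis=0)
    S = d.T @ d / n                      # ML covariance, divisor n
    G = d @ np.linalg.inv(S) @ d.T       # matrix of g_ij values
    b1 = (G ** 3).sum() / n ** 2
    b2 = (np.diag(G) ** 2).mean()
    return b1, b2

rng = np.random.default_rng(9)
b1, b2 = mardia_sample(rng.standard_normal((500, 3)))
# For normal data, b1 should be near 0 and b2 near p(p + 2) = 15.
```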

Empirical powers were calculated as the proportion of rejections at both the [alpha] = 0.10 and the [alpha] = 0.05 levels of significance. Empirically-generated critical values (based upon 10,000 samples) are employed for all test statistics in the interest of equitable power comparison.

Power Simulation Results

The results of our power comparison (for [alpha] = 0.10) are given in Figure 1. Each pair of histograms in the figure shows the powers of the six test statistics for both sample sizes (n = 20 and n = 50) when p = 2 (graph on left) or p = 6 (graph on right). When [alpha] = 0.05, the powers of all tests, naturally, are smaller than at [alpha] = 0.10. The relative performances of the tests, however, are unaffected by the choice of [alpha] (0.10 or 0.05) for each experimental combination of n, p, and type of non-normality examined. In the interest of brevity, therefore, we have not presented the results for [alpha] = 0.05.

Careful examination of each of the graphs in Figure 1 reveals that no test statistic is uniformly most powerful over all of the configurations considered in the simulation study. Indeed, the results presented here give us reason to reiterate, albeit less vigorously, the recommendations of Andrews, Gnanadesikan, & Warner (1973:95) who advise that "... a variety of techniques with differing sensitivities to the different types of departures" should be used when testing for multivariate non-normality. In the subsections that follow, we comment on the "sensitivity" and relative performance of each of the test statistics examined over the various combinations of n, p, and type of non-normality.

Hawkins Test Statistic (HAW)

Hawkins' statistic yields excellent power characteristics for many of the distributions considered in this study. This test statistic has, for example, excellent power against both symmetric, heavy-tailed and skewed distributions regardless of the sample size or dimension. From the graphs in Figure 1, it is clear that Hawkins' test statistic very often enjoys increased statistical power when the dimension is expanded from 2 to 6, for both sample sizes. In contrast, all but one of the competing statistics tended to lose power as the dimension was increased, especially with small samples (n = 20). The one exception is the PME statistic, which we describe in more detail below. Hawkins' statistic does not, however, exhibit high power against symmetric, light-tailed distributions and, in fact, the comparatively poor power of Hawkins' test for these types of distributions worsens with increasing dimension.

Percent Mean Difference Test Statistic (PME)

Fattorini's PME test statistic also exhibits good power against skewed and symmetric, heavy-tailed distributions. For all of these types of non-normality, however, the power of the PME statistic declined markedly when the dimension was reduced from 6 to 2. This phenomenon is especially noticeable with small samples (n = 20). The practical implication here is that while the statistic's relative performance is good for skewed and symmetric, heavy-tailed distributions, that performance depends in large part upon the ratio n/p and suffers greatly when this ratio is large. In addition, the PME statistic, like Hawkins' statistic, has relatively poor power against symmetric, light-tailed distributions. There is little reason, then, to recommend the PME statistic over Hawkins' statistic unless the non-normality is likely to be in the form of very heavy-tailed distributions and the ratio n/p is quite small.

Paulson-Roohan-Sullo Test Statistic (PRS)

The PRS test statistic provides superior power only on those occasions where the non-normality manifests itself in the form of symmetric, light-tailed distributions. Results presented in Figure 1, however, do demonstrate that on such occasions the PRS test statistic has markedly better relative power than all but one other statistic (see CM below). The practical implication is that this statistic is most powerful in situations where Hawkins' test and the PME statistic provide relatively poor performances. The reader is cautioned, however, that both the PRS and CM statistics have relatively poor power against skewed and symmetric, heavy-tailed distributions. Indeed, on some occasions the powers of these tests can be smaller than the actual level of the test (i.e. against symmetric, medium- to heavy-tailed distributions where the ratio of sample size to dimension is relatively small, n/p [less than or equal to] 4). Thus, the PRS and CM test statistics are biased tests for detecting multivariate non-normality.

Tsai-Koziol Test Statistic (TK)

In terms of statistical power, the TK test statistic proves to be markedly inferior to nearly all of the other test statistics except on two occasions where the ratio of sample size to dimension was large and the non-normality occurred in the form of moderately skewed distributions. There is little reason to consider this statistic in the data analytic setting or in any future research efforts.

Cramer-Von Mises Test Statistic (CM)

The CM statistic, like the PRS test statistic, enjoys power advantages only on those occasions characterized by symmetric, light-tailed distributions. Even then, the power of the CM statistic is less than that of the PRS statistic. Both of these statistics are inferior to nearly all other tests examined when attempting to detect multivariate non-normality in the form of heavy-tailed or skewed distributions. Finally, there is little reason to prefer CM to PRS when choosing a test that will be sensitive to symmetric, light-tailed forms of non-normality.

Extended Malkovich and Afifi Test Statistic (EMA)

The EMA statistic enjoys adequate relative power against skewed and symmetric, heavy-tailed distributions, typically ranking third (after Hawkins and PME) in order of relative power for these experimental conditions. The EMA statistic does enjoy power advantages over all other tests on those occasions where the nonnormal distribution is extremely skewed and the ratio of sample size to dimension is quite large. However, under these conditions all of the tests have reasonably large powers and any power differences are likely to be inconsequential. Incidentally, the EMA statistic also gives a relatively poor performance against multivariate non-normality in the form of light-tailed distributions.

Conclusions

This study compares the relative powers of six test statistics (all of which are functions of the squared-radii statistic) that can be used to detect multivariate non-normality. While none of the statistics considered here was most powerful against all of the alternative distributions simulated, Hawkins' test statistic appears to have relatively good power against many of the types of multivariate non-normality considered in the present study. This is especially true of non-normality in the form of skewed or heavy-tailed distributions. Even on those occasions when Hawkins' test statistic does not yield superior power (for symmetric, light-tailed distributions), the power is fairly good in that the power differences between Hawkins' test and the "best" test statistic never exceed about .10. It is also worth mentioning that Hawkins' test statistic is one of the few tests examined here that benefits (enjoys increased power) from increases in the dimension with no associated increases in sample size. Additionally, Hawkins' test statistic is not difficult to compute and is readily applied in the research setting.

Two cautionary notes on applying Hawkins' test statistic are in order. If the ratio of the sample size n to the dimension p is too small, the statistic HAW can be negative and, therefore, useless. Simulation results indicate that this condition can usually be avoided if care is taken to ensure that the ratio of sample size to dimension is greater than or equal to 2. Also, in his paper Hawkins (1981) uses asymptotic critical values for an example application and states that the asymptotic critical values seem to be adequate. Unfortunately, this study reveals that the asymptotic critical values suggested by Hawkins (1981) yield actual test levels that differ considerably from the nominal levels. As noted in Section 2, critical values for Hawkins' statistic have been tabulated and are given in the appendix.

[FIGURE 1 OMITTED]

```
Appendix. Approximate upper .10, .05, .025, .001 and .0005 level
critical values for Hawkins' test for multivariate normality of a
single distribution.

                           Critical Values
Dimension   Sample Size    .10     .05    .025    .001   .0005

p = 2       n =  10      1.010   1.262   1.523   1.927   2.224
            n =  20      1.027   1.269   1.573   1.903   2.281
            n =  30      1.031   1.342   1.633   1.994   2.323
            n =  40      1.033   1.314   1.546   1.954   2.227
            n =  50      1.051   1.310   1.574   1.955   2.220
            n =  75      1.057   1.332   1.595   1.978   2.291
            n = 100      1.055   1.332   1.588   2.004   2.276
p = 3       n =  10      0.983   1.228   1.534   1.970   2.305
            n =  20      0.983   1.237   1.505   1.862   2.217
            n =  30      0.997   1.227   1.483   1.801   2.096
            n =  40      0.999   1.256   1.520   1.894   2.179
            n =  50      1.015   1.255   1.503   1.871   2.176
            n =  75      1.005   1.216   1.466   1.787   2.074
            n = 100      1.006   1.222   1.471   1.855   2.089
p = 4       n =  10      0.938   1.177   1.415   1.779   2.059
            n =  20      0.952   1.166   1.388   1.718   2.010
            n =  30      0.961   1.191   1.396   1.762   2.047
            n =  40      0.967   1.174   1.386   1.684   1.986
            n =  50      0.981   1.215   1.457   1.771   1.980
            n =  75      0.977   1.208   1.449   1.767   2.045
            n = 100      0.999   1.221   1.465   1.802   2.006
p = 5       n =  10      0.939   1.174   1.429   1.839   2.147
            n =  20      0.949   1.191   1.429   1.811   2.017
            n =  30      0.959   1.188   1.388   1.689   1.892
            n =  40      0.946   1.176   1.427   1.740   1.965
            n =  50      0.952   1.182   1.442   1.730   1.944
            n =  75      0.953   1.185   1.416   1.708   1.916
            n = 100      0.963   1.190   1.406   1.718   1.920
p = 6       n =  20      0.935   1.159   1.382   1.734   2.010
            n =  30      0.944   1.155   1.369   1.667   1.930
            n =  40      0.945   1.168   1.378   1.693   1.890
            n =  50      0.931   1.154   1.412   1.717   1.987
            n =  75      0.930   1.138   1.367   1.667   1.913
            n = 100      0.930   1.147   1.341   1.639   1.891
p = 8       n =  20      0.920   1.125   1.342   1.663   1.930
            n =  30      0.914   1.131   1.364   1.696   1.934
            n =  40      0.929   1.136   1.348   1.608   1.839
            n =  50      0.926   1.128   1.314   1.572   1.784
            n =  75      0.924   1.120   1.346   1.685   1.909
            n = 100      0.952   1.158   1.392   1.654   1.969
p = 10      n =  20      0.917   1.121   1.349   1.661   1.896
            n =  30      0.910   1.110   1.329   1.625   1.883
            n =  40      0.916   1.125   1.367   1.650   1.848
            n =  50      0.906   1.113   1.323   1.589   1.853
            n =  75      0.935   1.136   1.364   1.656   1.899
            n = 100      0.922   1.136   1.362   1.649   1.903
p = 12      n =  30      0.914   1.123   1.316   1.614   1.856
            n =  40      0.927   1.142   1.350   1.634   1.857
            n =  50      0.926   1.127   1.350   1.632   1.837
            n =  75      0.914   1.123   1.316   1.614   1.856
            n = 100      0.904   1.105   1.312   1.609   1.805
p = 15      n =  30      0.926   1.131   1.344   1.637   1.849
            n =  40      0.903   1.111   1.344   1.658   1.864
            n =  50      0.920   1.124   1.329   1.660   1.868
            n =  75      0.910   1.116   1.351   1.621   1.819
            n = 100      0.913   1.123   1.318   1.568   1.787
```

Literature Cited

Andrews, D. F., R. Gnanadesikan & J. L. Warner. 1973. Methods for assessing multivariate normality. P. 95-116 in Multivariate Analysis-III. (P. Krishniah, ed.), Academic Press, New York.

Booker, J. M., M. E. Johnson & R. J. Beckman. 1984. Investigation of an empirical probability measure based test for multivariate normality. ASA Proceedings of the Stat. Comp. Section 208-213.

Chhikara, R. S. & P. L. Odell. 1973. Discriminant analysis using certain normal exponential densities with emphasis on remote sensing applications. Pattern Recognition 5:259-272.

Fattorini, L. 1982. Assessing multivariate normality on beta plots. Statistica, 42:251-257.

Goodman, I. R. & S. Kotz. 1973. Multivariate [theta]-generalized normal distributions. J. Multivariate Anal. 3:204-219.

Hawkins, D. M. 1981. A new test for multivariate normality and homoscedasticity. Technometrics 23:105-110.

Healy, M. J. R. 1968. Multivariate normal plotting. Applied Statistics 17:157-161.

Jennings, L. W., D. M. Young & J. W. Seaman. 1990. An extension of the Malkovich and Afifi test for multivariate normality. Unpublished paper, Baylor University, Waco, 12 pp.

Koziol, J. A. 1982. A class of invariant procedures for assessing multivariate normality. Biometrika 69:423-427.

______. 1986. Assessing multivariate normality: a compendium. Comm. Statist. Theor. and Meth. 15:2763-2783.

Looney, S. W. 1986. A review of techniques for assessing multivariate normality. ASA Proceedings of the Stat. Comp. Section 280-285.

Malkovich, J. F. & A. A. Afifi. 1973. On tests for multivariate normality. Journ. Amer. Statist. Assn. 68:176-179.

Mardia, K. V. 1970. Measures of multivariate skewness and kurtosis with applications. Biometrika 57:519-530.

Moore, D. S. & J. B. Stubblebine. 1981. Chi-square tests for multivariate normality with applications to common stock prices. Commun. Stat.-Theor. and Meth. 10:713-738.

Paulson, A. S., P. Roohan & P. Sullo. 1987. Some empirical distribution function tests for multivariate normality. J. Statist. Comput. Simul. 28:15-30.

Royston, J. P. 1983. An extension of Shapiro and Wilk's test for normality to large samples. Applied Statistics 31:115-124.

Small, N. J. H. 1978. Plotting squared radii. Biometrika 65:657-658.

Tsai, K. & J. A. Koziol. 1988. A correlation type procedure for assessing multivariate normality. Commun. Statist.- Simula. 17:637-651.

Wilk, M. B., R. Gnanadesikan & M. J. Huyett. 1962. Probability plots for the gamma distribution. Technometrics 4:1-20.

Dean M. Young, Samuel L. Seaman and John W. Seaman, Jr.

Hankamer School of Business, Baylor University, Waco, Texas 76798