
Lag-one autocorrelation in short series: Estimation and hypotheses testing.

The present study reviews the main lag-one autocorrelation estimators and proposes a new one. Hypothesis testing is also explored and discussed, since the statistical significance of the estimates may be of interest. These topics are relevant for the methodological and behavioural sciences, as they have an impact on the techniques used for assessing intervention effectiveness.

It has to be taken into consideration that the controversy of previous decades on the existence of autocorrelation in behavioural data (Busk & Marascuilo, 1988; Huitema, 1985, 1988; Sharpley & Alavosius, 1988; Suen & Ary, 1987) was strongly related to the properties of the autocorrelation estimators. The evidence on the presence of serial dependence (Matyas & Greenwood, 1997; Parker, 2006) has led to exploring the effects of violating the independence assumption of several widely used procedures. In this regard, liberal Type I error rates have been obtained in the presence of positive serial dependence for traditional analysis of variance (Scheffé, 1959) and its modifications (Toothaker, Banz, Noble, Camp, & Davis, 1983). Additionally, randomization tests, a procedure that does not explicitly assume independence (Edgington & Onghena, 2007), have been shown to be affected by positive autocorrelation, both by reducing statistical power (Ferron & Ware, 1995) and, more recently, by distorting Type I error rates (Manolov & Solanas, 2009). The independence of residuals required by regression analysis (Weisberg, 1980) has led to the proposal that, after fitting the regression model, any statistically significant autocorrelation in the errors has to be eliminated before interpreting the regression coefficients. For instance, generalized least squares procedures such as the one proposed by Simonton (1977) and the Cochrane-Orcutt and Prais-Winsten versions require estimating the autocorrelation of the residuals. Imprecisely estimated serial dependence may therefore lead to elevated Type I error rates when assessing intervention effectiveness in short series.

Autoregressive integrated moving average (ARIMA) modeling has also been proposed for dealing with sequentially related data (Box & Jenkins, 1970). This procedure includes an initial model identification step, including autocorrelation estimation, prior to controlling for serial dependence and determining the efficacy of interventions. However, it has been shown that serial dependence distorts the performance of ARIMA in short series (Greenwood & Matyas, 1990). Unfortunately, series of the length required by ARIMA are not frequent in applied psychological studies and, moreover, even that length does not ensure correct model identification (Velicer & Harrop, 1983).

Several investigations (Arnau & Bono, 2001; DeCarlo & Tryon, 1993; Huitema & McKean, 1991, 2007a, 2007b; Matyas & Greenwood, 1991; McKean & Huitema, 1993) have carried out Monte Carlo simulation comparisons of autocorrelation estimators at different lags. These studies have shown that estimation and hypothesis testing are both problematic in short data series. Most of the estimators studied had considerable bias and were scarcely efficient for short series. As regards the asymptotic test based on Bartlett's (1946) proposal, it has proved unacceptable. These issues have to be taken into consideration when using widespread statistical packages, as they incorporate asymptotic results in their algorithms, making the correspondence between empirical and nominal Type I error rates dubious and compromising statistical power. Therefore, basic and applied researchers should know which estimators are incorporated in their statistical software, their mathematical expressions, and the asymptotic approximations used for testing hypotheses.

The main objectives of the present study were: (a) to describe several lag-one autocorrelation estimators, presenting the expressions for their computation; (b) to propose a new estimator and compare it with the previously developed estimators in terms of bias and mean square error (hereinafter, MSE); and (c) to estimate the statistical power of the tests associated with the ten estimators, based on Monte Carlo sampling.

Lag-one autocorrelation estimators

The rationale behind the present review lies in the lack of an integrative compilation of autocorrelation estimators. Their correct identification is necessary in order to avoid confusion: for instance, Cox's (1966) research seemed to centre on the conventional estimator, while in fact it dealt with the modified one (Moran, 1970); both are presented below.

Conventional estimator

Although there is a great diversity of autoregressive parameter estimators, the one most frequently utilised in the social and behavioural sciences is the conventional estimator (as referred to by Huitema & McKean, 1991). It is defined by the following expression:

$$r_1 = \frac{\sum_{i=1}^{n-1} (x_i - \bar{x})(x_{i+1} - \bar{x})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$$

Its mathematical expectation, presented in Kendall and Ord (1990), shows that its bias approximates $-(1 + 4\rho)/n$ for long series, where $\rho$ is the autoregressive parameter and $n$ is the series length. It has been demonstrated (Moran, 1948) that in independent processes $-n^{-1}$ is an exact result for the bias of $r_1$, without assuming normality of the random term. As regards the variance of $r_1$, Bartlett's (1946) equation is commonly used, although several investigations (Huitema & McKean, 1991; Matyas & Greenwood, 1991) have shown that it does not approximate sufficiently well the results obtained through Monte Carlo simulation. The mismatch between nominal and empirical Type I error rates and the inadequate power of the asymptotic statistical test reported by previous studies may be due to the bias of the estimator and the asymmetry of its sampling distribution.
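As a minimal sketch, the conventional estimator above can be computed in a few lines (the function name is ours, and NumPy is assumed; this is an illustration, not code from the study):

```python
import numpy as np

def r1_conventional(x):
    """Conventional lag-one autocorrelation estimator r_1:
    sum of lag-one cross-products of deviations from the mean,
    divided by the sum of squared deviations."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()                       # deviations from the series mean
    return float(np.dot(d[:-1], d[1:]) / np.dot(d, d))
```

For the series 1, 2, 3, 4, 5, for example, the numerator is 4 and the denominator 10, giving an estimate of .4.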

Modified estimator

Orcutt (1948) proposed the following estimator of autoregressive parameters:

$$r_1^{*} = \frac{n}{n-1} \cdot \frac{\sum_{i=1}^{n-1} (x_i - \bar{x})(x_{i+1} - \bar{x})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$$

Hereinafter, this estimator will be referred to as the modified estimator, as it is a linear modification of the conventional estimator presented above. On the basis of its mathematical expectation, described by Marriott and Pope (1954), it can be seen that the bias of the modified estimator approximates $-(1 + 3\rho)/n$ for long series and, thus, it is not identical to that of the conventional estimator, as has been assumed (Huitema & McKean, 1991). The differences between the independent-process biases reported by Moran (1948) and Marriott and Pope (1954) may be due to the asymmetry of the sampling distribution of the estimator, which casts doubt on the utility of the mathematical expectation as a bias criterion (Kendall, 1954). Moran (1967) demonstrated that $\mathrm{Var}(r_1^{*})$ depends on the shape of the distribution of the random term.

Cyclic estimator

A cyclic estimator for autocorrelations at different lags was investigated by Anderson (1942), although it had previously been proposed by H. Hotelling (Moran, 1948). It is defined as:

$$r_1^{c} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(x_{i+1} - \bar{x})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \quad \text{where } x_{n+1} \equiv x_1$$

For independent processes, Anderson (1942) derived the exact distribution of the lag-one estimator for several series lengths. The distribution is highly asymmetric in short series and, according to Kendall (1954), in those cases bias should not be determined by means of procedures based on the mathematical expectation.
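Under the circular convention in which the last observation is paired with the first, a sketch of the cyclic estimator looks as follows (function name ours; the wraparound is what distinguishes it from the conventional estimator):

```python
import numpy as np

def r1_cyclic(x):
    """Cyclic (circular) lag-one estimator: the numerator wraps around,
    pairing the last deviation with the first (x_{n+1} = x_1)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    # np.roll(d, -1) shifts deviations left, so d[i] is paired with d[i+1],
    # and the final deviation is paired with the first one
    return float(np.dot(d, np.roll(d, -1)) / np.dot(d, d))
```

Note that the extra wraparound term can change the estimate noticeably in short series: for 1, 2, 3, 4, 5 the cyclic estimate is 0, whereas the conventional estimate is .4.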

Exact estimator

The expression for the exact estimator (Kendall, 1954) corresponds to the one generally used for calculating the correlation coefficient:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Procedures based on the mathematical expectation led Kendall (1954) to obtain the bias of this estimator in independent processes: approximately $-1/(n-1)$ for long series.

C statistic

The C statistic was developed by Young (1941) in order to determine whether a data series is random. Although it has been discussed and tested as a means of assessing intervention effectiveness (Crosbie, 1989; Tryon, 1982, 1984), DeCarlo and Tryon (1993) demonstrated that the C statistic is an estimator of lag-one autocorrelation, although it does not perform as expected in short data series. The C statistic can be obtained through the following expression:

$$C = 1 - \frac{\sum_{i=1}^{n-1} (x_i - x_{i+1})^2}{2 \sum_{i=1}^{n} (x_i - \bar{x})^2}$$
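The expression above is a direct function of the squared successive differences and can be transcribed as follows (function name ours, offered only as an illustrative sketch):

```python
import numpy as np

def c_statistic(x):
    """Young's C statistic: one minus the sum of squared successive
    differences relative to twice the sum of squared deviations."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    # np.diff(x) gives the n-1 successive differences x_i - x_{i+1} (up to sign,
    # which squaring removes)
    return float(1.0 - np.sum(np.diff(x) ** 2) / (2.0 * np.dot(d, d)))
```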

Fuller's estimator

Fuller (1976) proposed an estimator intended to correct the conventional estimator's bias, especially in short series. The following expression represents what we refer to as the Fuller estimator:

$$r_1^{f} = r_1 + \frac{1}{n-1} \left(1 - r_1^2\right)$$
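As a sketch, the Fuller correction simply adds $(1 - r_1^2)/(n-1)$ to the conventional estimate (helper names ours; this is an illustration rather than code from the study):

```python
import numpy as np

def r1_conventional(x):
    # conventional lag-one estimator, as defined earlier
    d = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(d[:-1], d[1:]) / np.dot(d, d))

def r1_fuller(x):
    """Fuller's bias-corrected estimator: r_1 + (1 - r_1^2) / (n - 1)."""
    r1 = r1_conventional(x)
    return r1 + (1.0 - r1 ** 2) / (len(x) - 1)
```

For the series 1, 2, 3, 4, 5, the conventional estimate .4 is corrected upward to .61.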

Least squares estimators

Tuan (1992) presents two least squares estimators, whose lag-one formulae can be expressed in the following manner:

Least squares estimator:

$$r_1^{ls} = \frac{\sum_{i=1}^{n-1} (x_i - \bar{x})(x_{i+1} - \bar{x})}{\sum_{i=1}^{n-1} (x_i - \bar{x})^2}$$

Least squares forward-backward estimator:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

In the first expression the denominator contains only n - 1 terms, since the information about the last data point is omitted. The second expression has n terms in its denominator, where the additional term arises from an averaged deviate of the initial and final data points.
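The first (ordinary) least squares version can be sketched as follows (function name ours; the forward-backward variant is not sketched because its full expression is not reproduced above):

```python
import numpy as np

def r1_least_squares(x):
    """Least squares lag-one estimator: same numerator as the conventional
    estimator, but the denominator sums only the first n-1 squared deviations."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float(np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1]))
```

Because the denominator drops the last squared deviation, this estimate is never smaller in magnitude than the conventional one computed from the same data.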

Translated estimator

The $r_1^{+}$ estimator was proposed by Huitema and McKean (1991):

$$r_1^{+} = r_1 + \frac{1}{n}$$

Throughout this article it will be referred to as the translated estimator, as it applies a translation to the conventional estimator in order to correct part of the $n^{-1}$ bias. It can be demonstrated that $\mathrm{Bias}(r_1^{+})$ is approximately $-4\rho/n$.

Other autocorrelation estimators

It is practically impossible for a single investigation to assess all existing methods for estimating autocorrelation. The present study includes only the estimators that are common in the behavioural sciences literature and in statistical packages, omitting, for instance, the bias-adjusted estimator r1' (Arnau, 1999; Arnau & Bono, 2001). Additionally, the estimators proposed by Huitema and McKean (1994c) and the jackknife estimator (Quenouille, 1949) were not included in this study, since they are not very efficient despite the bias reduction they achieve. In fact, neither the jackknife nor the bootstrap is an estimator in itself; rather, both can be applied to any estimator in order to reduce its bias, as has already been done (Huitema & McKean, 1994a; McKnight, McKean, & Huitema, 2000).

The maximum likelihood estimator is obtained by solving a cubic equation, assuming independent and normally distributed errors. There is an expression of this estimator (Kendall & Ord, 1990) that could be more easily incorporated into statistical software, but it has not been examined in any other article, nor do the authors justify the simplification they propose.

The δ-recursive estimator

The present investigation proposes a new lag-one autocorrelation estimator, referred to as the δ-recursive estimator, which is defined as follows:

$$r_1^{\delta} = \left(r_1 + \frac{1}{n}\right) \left(1 + \frac{\delta}{n}\right) = r_1^{+} \left(1 + \frac{\delta}{n}\right), \quad \delta \geq 0.$$

In the expression above, $r_1$ is the conventional estimator, $r_1^{+}$ is the translated estimator, $n$ is the length of the data series, and $\delta$ is a constant for bias correction. The expression illustrates the close relationship between the translated and the proposed estimators, the two being equivalent when $\delta$ equals zero. As can be seen, an additional correction is introduced to the translated estimator, since the latter is unbiased only for independent data series. The objective of the δ-recursive estimator is therefore to maintain the desirable properties of $r_1^{+}$ for $\rho_1 = 0$ and to reduce bias for $\rho_1 \neq 0$. This reduction of bias is achieved by means of the acceleration constant $\delta$: a greater value of $\delta$ implies a greater reduction in bias, keeping in mind that bias is also reduced when more measurements ($n$) are available. However, it has to be taken into account that $\mathrm{Var}(r_1^{\delta}) = \mathrm{Var}[r_1^{+}(1 + \delta/n)] = (1 + \delta/n)^2 \, \mathrm{Var}(r_1^{+})$ and, thus, for greater values of the constant the proposed estimator becomes less efficient than the translated one. Therefore, the value of $\delta$ has to be chosen so as to reduce the MSE, and not only the bias, for the proposed estimator to be useful.
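A minimal sketch of the proposed estimator, built on the conventional one (helper and argument names are ours):

```python
import numpy as np

def r1_conventional(x):
    # conventional lag-one estimator, as defined earlier
    d = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(d[:-1], d[1:]) / np.dot(d, d))

def r1_delta(x, delta=0.0):
    """delta-recursive estimator: the translated estimator (r_1 + 1/n)
    rescaled by (1 + delta/n); delta = 0 recovers the translated estimator."""
    n = len(x)
    return (r1_conventional(x) + 1.0 / n) * (1.0 + delta / n)
```

With δ = 0 the value coincides with the translated estimate; larger δ moves the estimate further from zero, trading some variance for bias reduction, as discussed above.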

Some analytical and asymptotic results have been derived for the first-order δ-recursive estimator:

$$E(r_1^{\delta} \mid \rho_1) = (n + \delta) \left(\frac{E(r_1 \mid \rho_1)}{n} + \frac{1}{n^2}\right)$$

$$\mathrm{Var}(r_1^{\delta} \mid \rho_1) = \left(\frac{n + \delta}{n}\right)^2 \mathrm{Var}(r_1 \mid \rho_1)$$

$$\mathrm{Bias}(r_1^{\delta} \mid \rho_1) = (n + \delta) \left(\frac{E(r_1 \mid \rho_1)}{n} + \frac{1}{n^2}\right) - \rho_1$$

Regarding the asymptotic distribution of the δ-recursive estimator in independent processes,

$$r_1^{\delta} \rightarrow N\!\left(0, \; \frac{(n-1)^2 (n+\delta)^2}{n^4 (n-1)}\right).$$

Although there is close agreement between the theoretical and empirical sampling distributions for 50 data points, preliminary studies suggest that 100 measurement points are necessary.

Monte Carlo simulation: Mean Square Error

Method

The first experimental section of the current investigation consists of a comparison between the different lag-one autocorrelation estimators in terms of MSE, a precision indicator that incorporates information about both bias and variance. This measure was chosen as it has been suggested to be appropriate for describing both biased and unbiased estimators (Spanos, 1987) and for comparing estimators (Jenkins & Watts, 1968).

The computer-intensive technique utilised was Monte Carlo simulation, which is the optimal choice when the population distribution (i.e., the value of the autoregressive parameter and the distribution of the random variable) is known (Noreen, 1989). Data series of ten different lengths (n = 5, 6, 7, 8, 9, 10, 15, 20, 50, and 100) were generated using a first-order autoregressive model of the form $e_t = \rho_1 e_{t-1} + u_t$, testing nineteen levels of lag-one autocorrelation ($\rho_1$): -.9(.1).9. This model and these levels of serial dependence are the most common ones in studies on autocorrelation estimation (e.g., Huitema & McKean, 1991, 1994b; Matyas & Greenwood, 1991). The error term followed three different distribution shapes with the same mean (zero) and the same standard deviation (one). Nonnormal distributions were included alongside the typically used normal distribution, given the evidence that normal distributions may not represent behavioural data sufficiently well in some cases (Bradley, 1977; Micceri, 1989); nonnormal distributions have already been studied in other contexts (Sawilowsky & Blair, 1992). In the present research we chose a uniform distribution in order to study the importance of kurtosis (a rectangular distribution is more platykurtic than the normal one, with a $\gamma_2$ value of -1.2), specifying the α and β (i.e., minimum and maximum) parameters to be equal to $-\sqrt{3}$ and $\sqrt{3}$ (approximately -1.7320508 and 1.7320508), respectively, in order to obtain the abovementioned mean and variance. A negative exponential distribution was employed to explore the effect of skewness, as this type of distribution is asymmetrical in contrast to the Gaussian distribution, with a $\gamma_1$ value of 2. Zero mean and unit standard deviation were achieved by simulating a one-parameter exponential distribution (location θ = 0) with scale parameter equal to 1 and subtracting one from the data.
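The generation scheme described above can be sketched as follows. The function names, the burn-in device used to wash out the arbitrary starting value, and the use of NumPy are our assumptions; the original study used Fortran 90 with NAG routines.

```python
import numpy as np

def generate_ar1(n, rho1, dist="normal", burn_in=50, seed=None):
    """Generate n points from the AR(1) model e_t = rho1 * e_{t-1} + u_t,
    with errors standardized to mean 0 and standard deviation 1.
    The first burn_in observations are discarded so that the arbitrary
    start-up value e_0 = 0 has little influence on the returned series."""
    rng = np.random.default_rng(seed)
    size = n + burn_in
    if dist == "normal":
        u = rng.normal(0.0, 1.0, size)
    elif dist == "uniform":
        # rectangular on [-sqrt(3), sqrt(3)] -> mean 0, sd 1
        u = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size)
    elif dist == "exponential":
        # negative exponential with scale 1, shifted by -1 -> mean 0, sd 1
        u = rng.exponential(1.0, size) - 1.0
    else:
        raise ValueError("unknown error distribution")
    e = np.zeros(size)
    for t in range(1, size):
        e[t] = rho1 * e[t - 1] + u[t]
    return e[burn_in:]
```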

For each of the 570 experimental conditions, 300,000 samples were generated using Fortran 90 and the NAG library routines nag_rand_neg_exp, nag_rand_normal, and nag_rand_uniform. We verified the simulation process by comparing theoretical results available in the scientific literature with the estimators' means and variances computed from the simulated data.

Prior to comparing the ten estimators, we carried out a preliminary study on the optimal value of δ for different series lengths, in terms of minimizing MSE across all levels of autocorrelation from -.9 to .9. Monte Carlo simulations involving 300,000 iterations per experimental condition suggest that the optimal δ depends on the shape of the errors' distribution. Nevertheless, as applied researchers are not likely to know the errors' distribution, we chose a δ that is suitable for the three distributional shapes studied. For series lengths from 5 to 9 the optimal value turned out to be 0 and, thus, the MSE values for the δ-recursive estimator are the same as for the translated estimator. For n = 10 the δ constant was set to .4, for n = 15 to .9, and for n = 20 to 1.2. For longer series, lower MSE values were obtained for δ ranging from .7 to 1.5. As there was practically no difference between those values for series with 50 and 100 data points, δ was set to 1, the only integer in that interval.

Results

The focus of this section is on intermediate levels of autocorrelation (between -.6 and .6), as those have been found to be more frequent in single-case data (Matyas & Greenwood, 1997; Parker, 2006). Likewise, the results for shorter data series will be emphasised, as those appear to be more common in behavioural data (Huitema, 1985).

MSE decays exponentially as series length increases, and the differences between the estimators are also reduced to a minimum for n > 20, as Figure 1 shows.

[FIGURE 1 OMITTED]

The average MSE over all values of $\rho_1$ studied can be taken as a general indicator of the performance of the estimators. This information can also be useful for an applied researcher who has to choose an autocorrelation estimator and has no prior knowledge of the possible direction and level of serial dependence. The translated estimator shows the lowest MSE for series of length 5 to 9, while for n ≥ 10 it is better to use the Fuller, the translated, or the δ-recursive estimator, which show practically equivalent MSE values and outperform the remaining estimators (see Table 1). The δ-recursive estimator performed slightly better than any other estimator tested for series with n ≥ 15. It is important to remark that the conventional estimator, commonly used in the behavioural sciences, is not the most adequate one in terms of MSE.

It has to be highlighted that there is a notable divergence between the best performers for negative and for positive serial dependence. As regards $\rho_1$ = -.3 (see Table 1), the conventional and the cyclic estimators show better performance for n ≤ 20. For $\rho_1$ = .0 (see Table 2), the estimators with the lowest MSE are the translated, the Fuller, and the δ-recursive. For positive values of the autoregressive parameter (Table 2), the same three estimators and the C statistic excel.

When focusing on bias, one of the components of MSE, the conventional and the cyclic estimators prove to be less biased for low negative autocorrelation (Table 3), while the translated, the C statistic, and the δ-recursive estimators are unbiased for independent data series (Table 4). Table 4 also contains information about some positive values of the autoregressive parameter. For $\rho_1$ = .3, the bias of the Fuller, the translated, the C statistic, and the δ-recursive estimators is half that of the remaining estimators for 5 ≤ n ≤ 10. For higher positive serial dependence, the aforementioned four estimators are once again the least biased. The proposed δ-recursive estimator is the least biased for positive autocorrelation in series with 10 and 15 data points, the cases in which δ was set to .4 and .9, respectively.

As regards the relevance of the errors' distribution, Figure 2 illustrates the general finding that MSE tends to be somewhat smaller when the errors follow a negative (i.e., positively skewed) exponential distribution and greater when they are uniformly distributed.

[FIGURE 2 OMITTED]

Monte Carlo sampling: Statistical power

Method

In the first stage, the 1% and 5% cut-off points were estimated for each estimator's sampling distribution and each series length. In contrast with previous studies (e.g., Huitema & McKean, 1994b, 2000), Monte Carlo methods based on 300,000 iterations were used to estimate the cut-off points, as an alternative to asymptotic tests, which do not seem appropriate for short series (Huitema & McKean, 1991). That is, the power estimates presented here are not founded on a test statistic with large-sample properties; instead, the statistical tests associated with the autocorrelation estimators were based on Monte Carlo sampling, a suitable approach when the sampling distribution of the test statistic is not known (Noreen, 1989). The analysis was based on nondirectional null hypotheses ($H_0$: $\rho_1$ = .0) and, thus, the values corresponding to quantiles .005 and .995 for 1% alpha, and quantiles .025 and .975 for 5% alpha, were identified. Power was estimated as the proportion of values smaller than the lower bound or greater than the upper bound out of 300,000 iterations per parameter level.
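The two-stage procedure can be sketched as follows, at a far smaller number of iterations than the 300,000 used in the study. All names, the normal-errors choice, and the burn-in are our assumptions for the illustration.

```python
import numpy as np

def r1(x):
    # conventional lag-one autocorrelation estimator
    d = x - np.mean(x)
    return float(np.dot(d[:-1], d[1:]) / np.dot(d, d))

def ar1_series(n, rho1, rng, burn_in=50):
    # AR(1) series with standard normal errors; burn-in discarded
    u = rng.normal(size=n + burn_in)
    e = np.zeros(n + burn_in)
    for t in range(1, n + burn_in):
        e[t] = rho1 * e[t - 1] + u[t]
    return e[burn_in:]

def mc_power(estimator, n, rho1, alpha=0.05, reps=2000, seed=None):
    """Stage 1: estimate the empirical alpha/2 and 1 - alpha/2 cut-offs of
    the estimator's null (rho_1 = 0) sampling distribution by simulation.
    Stage 2: estimate power as the rejection proportion under rho_1."""
    rng = np.random.default_rng(seed)
    null = np.array([estimator(ar1_series(n, 0.0, rng)) for _ in range(reps)])
    lo, hi = np.quantile(null, [alpha / 2.0, 1.0 - alpha / 2.0])
    stats = np.array([estimator(ar1_series(n, rho1, rng)) for _ in range(reps)])
    return float(np.mean((stats < lo) | (stats > hi)))
```

By construction, the rejection rate at $\rho_1$ = 0 stays close to alpha, while the estimated power grows with the series length and the magnitude of the autocorrelation.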

Results

The differences between the best and worst performers in terms of power are generally small, as can be seen by comparing the first and second columns of Tables 5, 6, and 7. The proposed estimator performs approximately as well as the best performer in each condition. In general, sensitivity is rather low in short series: unless the applied researcher has at least 20 measurement times, even high values of $|\rho_1|$ may not be reliably detected as statistically significant (Table 7).

If a 1% alpha level is chosen, Type II errors will be excessively frequent for series shorter than 50 observations. Greater power was found for series with exponentially distributed errors, exactly the case for which MSE was lower. Correspondingly, uniformly distributed errors were associated with less sensitivity.

Discussion

The present investigation extends previous research on autocorrelation estimators by comparing ten estimators (including a new bias-reducing proposal) in terms of two types of statistical error, bias and variance, summarised as mean square error. The current results concur with previous findings on the bias of autocorrelation estimators applied to short data series, especially in the case of $\rho_1$ > 0, as reported by Matyas and Greenwood (1991). It was also replicated that the translated estimator is less biased for positive autocorrelation and more biased for negative autocorrelation than the conventional estimator (Huitema & McKean, 1991). In general, all the estimators studied show lower MSE for negative values of the autoregressive parameter. However, there is no single optimal estimator for all levels of autocorrelation and all series lengths, as the comparison in terms of MSE values and bias suggests. Bias is present in independent data and becomes more pronounced in short autocorrelated series. Of all the estimators tested, only the δ-recursive, the translated, and the C statistic are unbiased for independent series. The magnitude of the bias is heterogeneous among the estimators and, as expected, tends to decrease for longer series. The presence of negative bias when $\rho_1$ > 0 implies that an existing positive serial dependence will be underestimated.

The positive bias in conditions with $\rho_1$ < 0 likewise entails that the autocorrelation estimate will be closer to zero than it should be. In both cases, it will be harder for the estimates to reach statistical significance when testing $H_0$: $\rho_1$ = 0.

The variance of the estimators is also dissimilar, and their efficiency depends on the autoregressive parameter and the series length. Therefore, there is no uniformly minimum-variance unbiased estimator among the ones assessed in the present study. The proposed δ-recursive estimator equals or improves on the performance of the other estimators (in terms of MSE and bias) when n ≥ 10 for positive autocorrelation and in overall performance across all $\rho_1$. Therefore, it can be considered a viable alternative whenever the sign of the autoregressive parameter is unknown or is supposed to be positive. For series with fewer than ten measurement times, the Fuller and the translated estimators are the most adequate if the applied researcher assumes that $\rho_1$ ≥ 0 or has no information about the possible direction of the serial dependence. For $\rho_1$ < 0, the conventional estimator shows the best results for all series lengths studied.

The present study also estimates power using tests based on Monte Carlo sampling rather than on asymptotic formulae, as had previously been done. The estimates obtained here are somewhat higher than the ones reported for Bartlett's test (Arnau & Bono, 2001; Huitema & McKean, 1991) and somewhat lower than the ones associated with the test recommended by Huitema and McKean (1991). Regarding Moran's (1948) approximation for the conventional estimator, the Monte Carlo sampling tests are more sensitive for $\rho_1$ > 0 and less sensitive for $\rho_1$ < 0. For the translated estimator, power estimates are similar for Monte Carlo sampling and Moran's approximation (Arnau & Bono, 2001). In general, present and past findings coincide regarding the low sensitivity in short data series. The difference in power between the tests associated with the estimators is only slight.

Combining the findings of previous research and the present investigation, it seems that empirical studies of real behavioural measurements (e.g., the surveys by Busk and Marascuilo, 1988; Huitema, 1985; and Parker, 2006) are not likely to resolve unequivocally the question of the existence and statistical significance of serial dependence in single-case data. The reason is the high statistical error of the estimators applied to short data series and the lack of power of the tests associated with those estimators. Only for series containing 50 or 100 data points would the evidence be meaningful.

For applied researchers, the lack of precision and sensitivity in estimating autocorrelation implies uncertainty about the degree of serial dependence that may be present in the behavioural data collected. It has been remarked that low estimates of serial dependence do not guarantee the adequacy of applying statistical techniques based on the General Linear Model to assess intervention effectiveness (Ferron, 2002). Therefore, clinical, educational, and social psychologists need to assess intervention effectiveness by means of procedures with appropriate Type I and Type II error rates in the presence of autocorrelation.

A specific contribution of the present study to methodological research is the comparison between errors' distribution shapes. The results indicate that generating data with errors following a normal, a rectangular, or a highly asymmetric distribution does not critically influence the MSE and power estimates. Hence, the findings of studies based solely on normally distributed errors may not be limited to the conditions actually simulated.

A limitation of the present study is that only an AR(1) model was employed to generate data. As has been pointed out (Harrop & Velicer, 1985), there are other models that may be used to represent behavioural data. Future studies may be based, for instance, on moving average models to extend the evidence on the performance of autocorrelation estimators. Additionally, in view of the bias present in each successive estimator proposed by different authors, a bias-reducing technique may be useful. The bootstrap adjustment of bias has been shown to be effective in correcting the positive bias for $\rho_1$ < 0 and the negative bias for $\rho_1$ > 0, and in reducing MSE, according to the data presented by McKnight et al. (2000) for series with n ≥ 20; in contrast, jackknife methods increase the error variance (Huitema & McKean, 1994a). We consider that the bootstrap ought to be applied to the estimators with the most adequate performance in terms of MSE: the Fuller, the translated, and the δ-recursive estimators when positive serial dependence is assumed or the sign of the autocorrelation is unknown, and the conventional estimator for negative serial dependence. It is therefore necessary to investigate the degree to which the bootstrap improves those estimators when few measurements are available, as is the case in applied psychological studies. Another possible application of the bootstrap is to construct confidence intervals around the autocorrelation estimates, since those have shown appropriate coverage (McKnight et al., 2000), and to use them to make statistical decisions. The bootstrap has the advantage of allowing asymmetric confidence intervals, which correspond to the skewed distributions of the estimators for short data series.
In this case, the power of the tests based on bootstrap confidence intervals has to be compared with the sensitivity of the tests constructed using Monte Carlo sampling, since Bartlett's (1946) and Moran's (1948) approximations for hypothesis testing seem inappropriate for short data series (Arnau & Bono, 2001; Huitema & McKean, 1991; Matyas & Greenwood, 1991).

REFERENCES

Anderson, R. L. (1942). Distribution of the serial correlation coefficient. The Annals of Mathematical Statistics, 13, 1-13.

Arnau, J. (1999). Reduccion del sesgo en la estimacion de la autocorrelacion en series temporales cortas. Metodologia de las Ciencias del Comportamiento, 1, 25-37.

Arnau, J., & Bono, R. (2001). Autocorrelation and bias in short time series: An alternative estimator. Quality & Quantity, 35, 365-387.

Bartlett, M. S. (1946). On the theoretical specification and sampling properties of autocorrelated time-series. Journal of the Royal Statistical Society, 8, 27-41.

Box, G. E. P., & Jenkins, G. M. (1970). Time series analysis, forecasting and control. San Francisco: Holden-Day.

Bradley, J. V. (1977). A common situation conducive to bizarre distribution shapes. American Statistician, 31, 147-150.

Busk, P. L., & Marascuilo, L. A. (1988). Autocorrelation in single-subject research: A counterargument to the myth of no autocorrelation. Behavioral Assessment, 10, 229-242.

Cox, D. R. (1966). The null distribution of the first serial correlation coefficient. Biometrika, 53, 623-626.

Crosbie, J. (1989). The inappropriateness of the C statistic for assessing stability or treatment effects with single-subject data. Behavioral Assessment, 11, 315-325.

DeCarlo, L. T., & Tryon, W. W. (1993). Estimating and testing autocorrelation with small samples: A comparison of the C-statistic to a modified estimator. Behaviour Research and Therapy, 31, 781-788.

Edgington, E. S., & Onghena, P. (2007). Randomization tests (4th ed.). London: Chapman & Hall/CRC.

Ferron, J. (2002). Reconsidering the use of the general linear model with single-case data. Behavior Research Methods, Instruments, & Computers, 34, 324-331.

Ferron, J., & Ware, W. (1995). Analyzing single-case data: The power of randomization tests. The Journal of Experimental Education, 63, 167-178.

Fuller, W. A. (1976). Introduction to statistical time series. New York: John Wiley & Sons.

Greenwood, K. M., & Matyas, T. A. (1990). Problems with application of interrupted time series analysis for brief single-subject data. Behavioral Assessment, 12, 355-370.

Harrop, J. W., & Velicer, W. F. (1985). A comparison of alternative approaches to the analysis of interrupted time-series. Multivariate Behavioral Research, 20, 27-44.

Huitema, B. E. (1985). Autocorrelation in applied behavior analysis: A myth. Behavioral Assessment, 7, 107-118.

Huitema, B. E. (1988). Autocorrelation: 10 years of confusion. Behavioral Assessment, 10, 253-294.

Huitema, B. E., & McKean, J. W. (1991). Autocorrelation estimation and inference with small samples. Psychological Bulletin, 110, 291-304.

Huitema, B. E., & McKean, J. W. (1994a). Reduced bias autocorrelation estimation: Three jackknife methods. Educational and Psychological Measurement, 54, 654-665.

Huitema, B. E., & McKean, J. W. (1994b). Tests of [H.sub.0]: [[rho].sub.1] = 0 for autocorrelation estimators [r.sub.F1] and [r.sub.F2]. Perceptual and Motor Skills, 78, 331-336.

Huitema, B. E., & McKean, J. W. (1994c). Two reduced-bias autocorrelation estimators: [r.sub.F1] and [r.sub.F2]. Perceptual and Motor Skills, 78, 323-330.

Huitema, B. E., & McKean, J. W. (2000). A simple and powerful test for autocorrelated errors in OLS intervention models. Psychological Reports, 87, 3-20.

Huitema, B. E., & McKean, J. W. (2007a). An improved portmanteau test for autocorrelated errors in interrupted time-series regression models. Behavior Research Methods, 39, 343-349.

Huitema, B. E., & McKean, J. W. (2007b). Identifying autocorrelation generated by various error processes in interrupted time-series regression designs: A comparison of AR1 and portmanteau tests. Educational and Psychological Measurement, 67, 447-459.

Jenkins, G. M., & Watts, D. G. (1968). Spectral analysis and its applications. San Francisco: Holden-Day.

Kendall, M. G. (1954). Note on bias in the estimation of autocorrelation. Biometrika, 41, 403-404.

Kendall, M. G., & Ord, J. K. (1990). Time series. Sevenoaks: Edward Arnold.

Manolov, R., & Solanas, A. (2009). Problems of the randomization test for AB designs. Psicológica, 30, 137-154.

Marriott, F. H. C., & Pope, A. (1954). Bias in the estimation of autocorrelation. Biometrika, 41, 390-402.

Matyas, T. A., & Greenwood, K. M. (1991). Problems in the estimation of autocorrelation in brief time series and some implications for behavioral data. Behavior Assessment, 13, 137-157.

Matyas, T. A., & Greenwood, K. M. (1997). Serial dependency in single-case time series. In R. D. Franklin, D. B. Allison, & B. S. Gorman (Eds.), Design and analysis of single-case research (pp. 215-243). Mahwah, NJ: Lawrence Erlbaum.

McKean, J. W., & Huitema, B. E. (1993). Small sample properties of the Spearman autocorrelation estimator. Perceptual and Motor Skills, 76, 384-386.

McKnight, S. D., McKean, J. W., & Huitema, B. E. (2000). A double bootstrap method to analyze linear models with autoregressive error terms. Psychological Methods, 5, 87-101.

Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105, 156-166.

Moran, P. A. P. (1948). Some theorems on time series. II. The significance of the serial correlation coefficient. Biometrika, 35, 255-267.

Moran, P. A. P. (1967). Testing for serial correlation with exponentially distributed variates. Biometrika, 54, 395-401.

Moran, P. A. P. (1970). A note on serial correlation coefficients. Biometrika, 57, 670-673.

Noreen, E. W. (1989). Computer-intensive methods for testing hypotheses: An introduction. New York: John Wiley and Sons.

Orcutt, G. H. (1948). A study of the autoregressive nature of the time series used for Tinbergen's model of the economic system of the United States, 1919-1932. Journal of the Royal Statistical Society (Series B), 10, 1-53.

Parker, R. I. (2006). Increased reliability for single-case research results: Is bootstrap the answer? Behavior Therapy, 37, 326-338.

Quenouille, M. (1949). Approximate tests of correlation in time-series. Journal of the Royal Statistical Society (Series B), 11, 68-84.

Sawilowsky, S. S., & Blair, R. C. (1992). A more realistic look at the robustness and Type II error properties of the t test to departures from population normality. Psychological Bulletin, 111, 352-360.

Scheffe, H. (1959). The analysis of variance. New York: Wiley.

Sharpley, C. F., & Alavosius, M. P. (1988). Autocorrelation in behavior data: An alternative perspective. Behavioral Assessment, 10, 243-251.

Simonton, D. K. (1977). Cross-sectional time-series experiments: Some suggested statistical analyses. Psychological Bulletin, 84, 489-502.

Spanos, A. (1987). Statistical foundations of econometric modelling. Cambridge: Cambridge University Press.

Suen, H. K., & Ary, D. (1987). Autocorrelation in behavior analysis: Myth or reality? Behavioral Assessment, 9, 125-130.

Toothaker, L. E., Banz, M., Noble, C., Camp, J., & Davis, D. (1983). N = 1 designs: The failure of ANOVA-based tests. Journal of Educational Statistics, 8, 289-309.

Tryon, W. W. (1982). A simplified time-series analysis for evaluating treatment interventions. Journal of Applied Behavior Analysis, 15, 423-429.

Tryon, W. W. (1984). "A simplified time-series analysis for evaluating treatment interventions": A rejoinder to Blumberg. Journal of Applied Behavior Analysis, 17, 543-544.

Tuan, D. P. (1992). Approximate distribution of parameter estimators for first-order autoregressive models. Journal of Time Series Analysis, 13, 147-170.

Velicer, W. F., & Harrop, J. W. (1983). The reliability and accuracy of the time-series model identification. Evaluation Review, 7, 551-560.

Weisberg, S. (1980). Applied linear regression. New York: John Wiley & Sons.

Young, L. C. (1941). On the randomness in ordered sequences. The Annals of Mathematical Statistics, 12, 293-300.

(Manuscript received: 8 May 2009; accepted: 25 September 2009)

Antonio Solanas * (1,3), Rumen Manolov (1), and Vicenta Sierra (2)

(1) University of Barcelona, Spain;

(2) ESADE-Ramon Llull University, Barcelona, Spain;

(3) Institute for Research in Brain, Cognition, and Behavior (IR3C)

* This research was supported by the Comissionat per a Universitats i Recerca del Departament d'Innovació, Universitats i Empresa of the Generalitat de Catalunya and the European Social Fund. Correspondence concerning this article should be addressed to Antonio Solanas, Departament de Metodologia de les Ciències del Comportament, Facultat de Psicologia, Universitat de Barcelona, Passeig de la Vall d'Hebron, 171, 08035-Barcelona, Spain. Phone number: +34933125076. Electronic mail may be sent to Antonio Solanas at antonio.solanas@ub.edu
Table 1. Mean square error of the ten lag-one autocorrelation
estimators in series with different lengths. Average: MSE
averaged across -.9 [less than or equal to] [[rho].sub.1] [less
than or equal to] .9.

                                      SERIES LENGTH
                  Auto-
Estimators     correlation      5        10       15

                                     Exponential errors

Conventional     Average       .257     .116     .071
                   -.6.        .102     .067     .047
                   -.3.        .081     .061     .047
Modified         Average       .297     .119     .070
                   -.6.        .122     .072     .049
                   -.3.        .135     .077     .054
Cyclic           Average       .308     .127     .074
                   -.6.        .092     .069     .048
                   -.3.        .094     .066     .049
Exact            Average       .335     .119     .069
                   -.6.        .151     .070     .047
                   -.3.        .188     .080     .054
C                Average       .232     .112     .069
statistic          -.6.        .238     .116     .072
                   -.3.        .153     .089     .062
Fuller           Average       .211     .103     .065
                   -.6.        .221     .100     .062
                   -.3.        .138     .077     .054
Least            Average       .318     .122     .070
Squares            -.6.        .130     .073     .048
                   -.3.        .149     .081     .055
Forward-         Average       .288     .117     .068
Backward           -.6.        .109     .068     .046
                   -.3.        .123     .074     .052
Translated       Average       .209     .103     .065
                   -.6.        .205     .098     .062
                   -.3.        .114     .072     .051
[delta]-         Average       .209     .103     .064
recursive          -.6.        .205     .097     .060
                   -.3.        .114     .075     .055

                                      SERIES LENGTH

Estimators     Autocorrelation  5        10       15

                                     Normal errors

Conventional     Average       .270     .127     .077
                   -.6.        .107     .069     .047
                   -.3.        .086     .066     .049
Modified         Average       .315     .131     .077
                   -.6.        .127     .073     .049
                   -.3.        .143     .082     .057
Cyclic           Average       .322     .138     .071
                   -.6.        .103     .072     .049
                   -.3.        .103     .073     .052
Exact            Average       .345     .129     .075
                   -.6.        .146     .069     .046
                   -.3.        .182     .083     .056
C                Average       .239     .117     .076
statistic          -.6.        .237     .112     .068
                   -.3.        .154     .089     .061
Fuller           Average       .225     .113     .070
                   -.6.        .231     .103     .063
                   -.3.        .147     .084     .058
Least            Average       .334     .131     .072
Squares            -.6.        .139     .074     .048
                   -.3.        .159     .084     .057
Forward-         Average       .306     .128     .075
Backward           -.6.        .114     .068     .046
                   -.3.        .130     .079     .055
Translated       Average       .221     .113     .081
                   -.6.        .214     .101     .062
                   -.3.        .121     .077     .055
[delta]-         Average       .221     .113     .071
recursive          -.6.        .214     .099     .060
                   -.3.        .121     .081     .059

                                     SERIES LENGTH

Estimators     Autocorrelation  5        10       15

                                     Uniform errors

Conventional     Average       .281     .133     .080
                   -.6.        .113     .073     .049
                   -.3.        .092     .071     .052
Modified         Average       .331     .137     .080
                   -.6.        .134     .077     .051
                   -.3.        .152     .088     .060
Cyclic           Average       .334     .145     .085
                   -.6.        .111     .077     .051
                   -.3.        .111     .078     .056
Exact            Average       .356     .135     .079
                   -.6.        .154     .074     .049
                   -.3.        .189     .088     .060
C                Average       .250     .122     .075
statistic          -.6.        .245     .114     .069
                   -.3.        .158     .092     .062
Fuller           Average       .237     .118     .074
                   -.6.        .239     .107     .065
                   -.3.        .154     .089     .061
Least            Average       .345     .137     .079
Squares            -.6.        .143     .078     .050
                   -.3.        .165     .089     .060
Forward-         Average       .320     .135     .079
Backward           -.6.        .121     .073     .049
                   -.3.        .139     .085     .059
Translated       Average       .231     .119     .074
                   -.6.        .221     .104     .063
                   -.3.        .127     .082     .058
[delta]-         Average       .231     .119     .073
recursive          -.6.        .221     .105     .065
                   -.3.        .127     .087     .063

Table 2. Mean square error of the ten different lag-one autocorrelation
estimators in series with different lengths.

                                    SERIES LENGTH

                  Auto-
Estimators     correlation      5        10       15

                                  Exponential errors

Conventional         0         .123     .072     .052
                    .3         .242     .106     .066
                    .6         .464     .177     .097
Modified             0         .192     .089     .059
                    .3         .310     .116     .069
                    .6         .517     .173     .092
Cyclic               0         .157     .081     .055
                    .3         .301     .117     .070
                    .6         .571     .192     .102
Exact                0         .255     .095     .061
                    .3         .371     .122     .071
                    .6         .556     .174     .091
C                    0         .122     .077     .055
statistic           .3         .156     .083     .055
                    .6         .269     .118     .070
Fuller               0         .100     .064     .048
                    .3         .124     .071     .049
                    .6         .242     .115     .070
Least                0         .215     .096     .062
Squares             .3         .344     .126     .072
                    .6         .554     .180     .093
Forward-             0         .182     .088     .059
Backward            .3         .303     .117     .069
                    .6         .511     .173     .092
Translated           0         .083     .062     .047
                    .3         .123     .074     .051
                    .6         .258     .119     .071
[delta]-             0         .083     .067     .053
recursive           .3         .123     .078     .055
                    .6         .258     .118     .069

                                    SERIES LENGTH

                  Auto-
Estimators     correlation      5        10       15

                                      Normal errors

Conventional         0         .130     .081     .058
                    .3         .253     .119     .076
                    .6         .485     .195     .108
Modified             0         .203     .100     .067
                    .3         .325     .132     .081
                    .6         .545     .194     .104
Cyclic               0         .167     .091     .063
                    .3         .315     .132     .081
                    .6         .588     .212     .114
Exact                0         .258     .104     .068
                    .3         .381     .136     .081
                    .6         .578     .193     .103
C                    0         .125     .081     .058
statistic           .3         .160     .091     .062
                    .6         .283     .132     .079
Fuller               0         .109     .074     .055
                    .3         .135     .084     .059
                    .6         .259     .131     .080
Least                0         .226     .104     .068
Squares             .3         .355     .137     .082
                    .6         .573     .196     .104
Forward-             0         .195     .099     .066
Backward            .3         .321     .132     .081
                    .6         .540     .193     .103
Translated           0         .090     .071     .054
                    .3         .132     .086     .061
                    .6         .274     .135     .081
[delta]-             0         .090     .077     .060
recursive           .3         .132     .091     .066
                    .6         .274     .134     .079

                                    SERIES LENGTH

                  Auto-
Estimators     correlation      5        10       15

                                     Uniform errors

Conventional         0         .137     .087     .062
                    .3         .264     .125     .079

                    .6         .506     .203     .112
Modified             0         .214     .107     .071
                    .3         .342     .140     .085
                    .6         .573     .202     .108
Cyclic               0         .176     .097     .066
                    .3         .329     .140     .085
                    .6         .609     .220     .118
Exact                0         .267     .110     .071
                    .3         .393     .143     .085
                    .6         .595     .201     .107
C                    0         .128     .083     .060
statistic           .3         .167     .095     .064
                    .6         .303     .139     .082
Fuller               0         .116     .080     .058
                    .3         .146     .090     .062
                    .6         .278     .138     .083
Least                0         .234     .109     .071
Squares             .3         .369     .143     .085
                    .6         .594     .204     .108
Forward-             0         .206     .106     .070
Backward            .3         .337     .140     .085
                    .6         .563     .201     .107
Translated           0         .097     .077     .057
                    .3         .141     .092     .064
                    .6         .290     .142     .084
[delta]-             0         .097     .083     .064
recursive           .3         .141     .097     .069
                    .6         .290     .141     .083

Table 3. Bias of the ten lag-one autocorrelation estimators in series
with different lengths. Average: bias averaged across -.9 [less than
or equal to] [[rho].sub.1] [less than or equal to] .9.

                                     SERIES LENGTH
                  Auto-
 Estimators    correlation      5        10       15

                                  Exponential errors

Conventional     Average      -.220    -.115    -.077
                   -.6.        .158     .103     .075
                   -.3.       -.018     .001     .003
Modified         Average      -.275    -.128    -.082
                   -.6.        .048     .048     .037
                   -.3.       -.098    -.032    -.018
Cyclic           Average      -.277    -.129    -.082
                   -.6.        .124     .093     .071
                   -.3.       -.063    -.009    -.001
Exact            Average      -.245    -.119    -.077
                   -.6.       -.043     .049     .040
                   -.3.       -.102    -.035    -.019
C                Average      -.014    -.012    -.010
statistic          -.6.        .339     .193     .133
                   -.3.        .172     .096     .067
Fuller           Average      -.012    -.025    -.021
                   -.6.        .340     .180     .123
                   -.3.        .186     .095     .065
Least            Average      -.276    -.127    -.081
Squares            -.6.        .045     .044     .035
                   -.3.       -.095    -.032    -.018
Forward-         Average      -.255    -.120    -.078
Backward           -.6.        .076     .059     .044
                   -.3.       -.081    -.027    -.016
Translated       Average      -.020    -.015    -.011
                   -.6.        .358     .203     .141
                   -.3.        .182     .101     .070
[delta]-         Average      -.020    -.016    -.011
recursive          -.6.        .358     .187     .114
                   -.3.        .182     .093     .056

                                    SERIES LENGTH
                  Auto-
 Estimators    correlation      5        10       15

                                    Normal errors

Conventional     Average      -.224    -.118    -.079
                   -.6.        .166     .108     .079
                   -.3.       -.015     .006     .007
Modified         Average      -.279    -.131    -.084
                   -.6.        .058     .054     .041
                   -.3.       -.093    -.027    -.014
Cyclic           Average      -.277    -.131    -.084
                   -.6.        .134     .099     .075
                   -.3.       -.057    -.004     .002
Exact            Average      -.257    -.124    -.080
                   -.6.        .344     .057     .045
                   -.3.       -.093    -.027    -.014
C                Average      -.020    -.016    -.012
statistic          -.6.        .350     .196     .137
                   -.3.        .174     .100     .070
Fuller           Average      -.016    -.028    -.022
                   -.6.        .350     .186     .128
                   -.3.        .189     .100     .069
Least            Average      -.281    -.129    -.082
Squares            -.6.        .047     .049     .039
                   -.3.       -.094    -.027    -.014
Forward-         Average      -.262    -.128    -.081
Backward           -.6.        .085     .065     .048
                   -.3.       -.076    -.023    -.012
Translated       Average      -.024    -.018    -.012
                   -.6.        .366     .193     .145
                   -.3.        .185     .106     .073
[delta]-         Average      -.024    -.019    -.013
recursive          -.6.        .366     .208     .118
                   -.3.        .185     .098     .060

                                    SERIES LENGTH
                  Auto-
 Estimators    correlation      5        10       15

                                   Uniform errors

Conventional     Average      -.225    -.118    -.079
                   -.6.        .172     .112     .082
                   -.3.       -.012     .008     .008
Modified         Average      -.281    -.131    -.085
                   -.6.        .065     .058     .045
                   -.3.       -.090    -.024    -.013
Cyclic           Average      -.278    -.131    -.084
                   -.6.        .139     .103     .078
                   -.3.       -.055    -.001     .004
Exact            Average      -.256    -.124    -.080
                   -.6.        .063     .062     .049
                   -.3.       -.089    -.023    -.012
C                Average      -.023    -.017    -.013
statistic          -.6.        .350     .200     .139
                   -.3.        .177     .103     .072
Fuller           Average      -.018    -.028    -.023
                   -.6.        .355     .190     .131
                   -.3.        .191     .102     .070
Least            Average      -.281    -.129    -.082
Squares            -.6.        .056     .054     .043
                   -.3.       -.092    -.025    -.013
Forward-         Average      -.264    -.125    -.081
Backward           -.6.        .091     .069     .052
                   -.3.       -.074    -.020    -.010
Translated       Average      -.025    -.018    -.012
                   -.6.        .372     .197     .148
                   -.3.        .188     .108     .075
[delta]-         Average      -.025    -.019    -.013
recursive          -.6.        .372     .212     .121
                   -.3.        .188     .101     .061

Table 4. Bias of the ten lag-one autocorrelation estimators
in series with different lengths.

                                   SERIES LENGTH
                  Auto-
 Estimators    correlation      5      10      15

                                Exponential errors

Conventional         0        -.200   -.100   -.066
                    .3        -.398   -.208   -.140
                    .6        -.615   -.338   -.229
Modified             0        -.250   -.111   -.071
                    .3        -.422   -.198   -.128
                    .6        -.618   -.309   -.203
Cyclic               0        -.250   -.111   -.071
                    .3        -.458   -.220   -.145
                    .6        -.697   -.353   -.234
Exact                0        -.235   -.109   -.071
                    .3        -.376   -.189   -.125
                    .6        -.545   -.292   -.196
C                    0         .000    .000    .000
                    .3        -.186   -.105   -.072
                    .6        -.383   -.226   -.158
Fuller               0         .019    .003    .001
                    .3        -.171   -.105   -.073
                    .6        -.386   -.242   -.171
Least                0        -.250   -.111   -.071
Squares             .3        -.424   -.197   -.128
                    .6        -.613   -.300   -.196
Forward-             0        -.238   -.109   -.070
Backward            .3        -.410   -.196   -.128
                    .6        -.602   -.304   -.201
Translated           0         .000    .000    .000
                    .3        -.198   -.108   -.073
                    .6        -.415   -.238   -.163
[delta]-             0         .000    .000    .000
recursive           .3        -.198   -.101   -.059
                    .6        -.415   -.224   -.136

                                   SERIES LENGTH
                  Auto-
 Estimators    correlation      5      10      15

                                   Normal errors

Conventional         0        -.200   -.100   -.067
                    .3        -.402   -.213   -.145
                    .6        -.629   -.351   -.238
Modified             0        -.250   -.111   -.071
                    .3        -.428   -.204   -.134
                    .6        -.637   -.324   -.212
Cyclic               0        -.250   -.111   -.072
                    .3        -.463   -.226   -.150
                    .6        -.703   -.365   -.242
Exact                0        -.238   -.110   -.071
                    .3        -.395   -.199   -.132
                    .6        -.581   -.311   -.207
C                    0         .000    .000    .000
                    .3        -.190   -.109   -.077
                    .6        -.402   -.240   -.167
Fuller               0         .017    .002    .001
                    .3        -.178   -.111   -.079
                    .6        -.402   -.255   -.179
Least                0        -.251   -.071   -.071
Squares             .3        -.424   -.201   -.133
                    .6        -.625   -.311   -.204
Forward-             0        -.240   -.071   -.071
Backward            .3        -.419   -.203   -.134
                    .6        -.625   -.320   -.211
Translated           0         .000    .000    .000
                    .3        -.202   -.113   -.078
                    .6        -.429   -.251   -.171
[delta]-             0         .000    .000    .000
recursive           .3        -.202   -.106   -.065
                    .6        -.429   -.237   -.145

                                   SERIES LENGTH
                  Auto-
 Estimators    correlation      5      10      15

                                   Uniform errors

Conventional         0        -.200   -.100   -.066
                    .3        -.408   -.217   -.147
                    .6        -.640   -.356   -.241
Modified             0        -.250   -.111   -.070
                    .3        -.436   -.208   -.136
                    .6        -.649   -.329   -.215
Cyclic               0        -.250   -.111   -.070
                    .3        -.468   -.229   -.152
                    .6        -.711   -.369   -.245
Exact                0        -.239   -.111   -.070
                    .3        -.405   -.204   -.135
                    .6        -.590   -.317   -.211
C                    0         .000    .000    .001
                    .3        -.197   -.113   -.079
                    .6        -.415   -.245   -.171
Fuller               0         .016    .001    .001
                    .3        -.186   -.115   -.081
                    .6        -.414   -.260   -.183
Least                0        -.250   -.111   -.071
Squares             .3        -.431   -.205   -.135
                    .6        -.635   -.316   -.208
Forward-             0        -.241   -.111   -.070
Backward            .3        -.427   -.208   -.136
                    .6        -.636   -.326   -.215
Translated           0         .000    .000    .001
                    .3        -.208   -.117   -.080
                    .6        -.440   -.256   -.174
[delta]-             0         .000    .000    .001
recursive           .3        -.208   -.110   -.067
                    .6        -.440   -.243   -.149

Table 5. Power estimates for 5% alpha for five-measurement series
and several values of the autoregressive parameter. The first
column represents the most sensitive test for each error
distribution; the second contains the least sensitive one; and
the third focuses on the proposed estimator.

                              Exponential error

                                                  [delta]
[[rho].sub.1]   C statistic        Circular    -recursive

-.6                   .1430           .0954         .1348
-.3                   .0674           .0599         .0634
.0                    .0504           .0501         .0501
.3                    .0643           .0559         .0616
.6                    .1224           .0683         .0976

                               Normal error

                                                  [delta]
[[rho].sub.1]     FBackward        Circular    -recursive

-.6                   .1358           .0859         .1340
-.3                   .0656           .0570         .0658
.0                    .0507           .0503         .0502
.3                    .0630           .0549         .0628
.6                    .0934           .0624         .0910

                              Uniform error

                                                  [delta]
[[rho].sub.1]   C statistic        Circular    -recursive

-.6                   .1175           .0799         .1180
-.3                   .0640           .0576         .0626
.0                    .0499           .0488         .0504
.3                    .0598           .0542         .0603
.6                    .0874           .0626         .0788

Table 6. Power estimates for 5% alpha for ten-measurement series
and several values of the autoregressive parameter. The first
column represents the most sensitive test for each error
distribution; the second contains the least sensitive one; and
the third focuses on the proposed estimator.

                             Exponential error

                                                [delta]-
[[rho].sub.1]   Translated     C statistic     recursive

-.6                  .4876           .4463         .4877
-.3                  .1528           .1537         .1529
.0                   .0502           .0499         .0504
.3                   .1076           .0962         .1081
.6                   .2991           .2643         .3000

                              Normal error

                                                [delta]-
[[rho].sub.1]    FBackward        Circular     recursive

-.6                  .3803           .3462         .3799
-.3                  .1153           .1087         .1177
.0                   .0497           .0496         .0497
.3                   .1124           .1069         .1126
.6                   .2980           .2684         .2913

                             Uniform error

ρ₁            Least squares       Circular   δ-recursive

-.6                  .3477           .3160         .3521
-.3                  .1085           .1042         .1128
.0                   .0502           .0502         .0501
.3                   .1063           .0986         .1037
.6                   .2706           .2346         .2574

Table 7. Power estimates at the 5% alpha level for twenty-measurement
series and several values of the autoregressive parameter. For each
error distribution, the first column of estimates corresponds to the
most sensitive test; the second, to the least sensitive one; the
third, to the proposed δ-recursive estimator.

                            Exponential error

ρ₁              FBackward     C statistic   δ-recursive

-.6                 .8167           .7761         .8095
-.3                 .3368           .3215         .3321
.0                  .0506           .0501         .0500
.3                  .1981           .1803         .1986
.6                  .6694           .6345         .6674

                            Normal error

ρ₁          Least squares     C statistic   δ-recursive

-.6                 .7287           .6993         .7242
-.3                 .2307           .2251         .2308
.0                  .0488           .0491         .0485
.3                  .2262           .2182         .2253
.6                  .6677           .6575         .6606

                            Uniform error

ρ₁          Least squares        Circular   δ-recursive

-.6                 .7080           .6828         .7061
-.3                 .2210           .2112         .2227
.0                  .0505           .0505         .0507
.3                  .2145           .2048         .2135
.6                  .6431           .6186         .6369
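The rejection rates reported in the tables above are obtained by Monte Carlo simulation: generating many short AR(1) series at a given value of the autoregressive parameter and counting how often each test rejects the null hypothesis of no serial dependence. The formulas for the C statistic, circular, and δ-recursive procedures appear earlier in the article, so the sketch below substitutes the conventional lag-one autocorrelation estimator together with an approximate large-sample ±1.96/√n criterion; both choices are illustrative assumptions, not the authors' procedures.

```python
import numpy as np

rng = np.random.default_rng(12345)

def ar1_series(n, rho, rng, burn_in=50):
    """Generate an AR(1) series y_t = rho * y_{t-1} + e_t with N(0, 1) errors."""
    e = rng.standard_normal(n + burn_in)
    y = np.empty(n + burn_in)
    y[0] = e[0]
    for t in range(1, n + burn_in):
        y[t] = rho * y[t - 1] + e[t]
    return y[burn_in:]  # drop the burn-in so the retained series is stationary

def r1(y):
    """Conventional lag-one autocorrelation estimator."""
    d = y - y.mean()
    return np.dot(d[:-1], d[1:]) / np.dot(d, d)

def rejection_rate(n, rho, reps, rng):
    """Proportion of replications in which |r1| exceeds 1.96 / sqrt(n)."""
    crit = 1.96 / np.sqrt(n)
    hits = sum(abs(r1(ar1_series(n, rho, rng))) > crit for _ in range(reps))
    return hits / reps

# Empirical Type I error rate (rho = 0) and power (rho = .6) for n = 20,
# analogous in spirit to the entries of Tables 5-7 (normal errors only)
print(rejection_rate(20, 0.0, 2000, rng))
print(rejection_rate(20, 0.6, 2000, rng))
```

Swapping the error generator (exponential, uniform) and the series length (10, 20) reproduces the remaining simulation conditions; the paper's estimates differ because each tabled test uses its own statistic and critical values rather than this simplified criterion.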
COPYRIGHT 2010 Universidad de Valencia

Article Details
Author: Solanas, Antonio; Manolov, Rumen; Sierra, Vicenta
Publication: Psicologica
Date: Jul 1, 2010
Words: 8202