# Problems and Solutions in Conducting Event Studies

Abstract

Event studies are a popular research paradigm in several fields of business research. They are now beginning to appear in the insurance literature. This article enumerates the steps common to all event studies and discusses the guidance that has been provided by others for the problems encountered at each of those steps.

Introduction

The event study is one of the most popular statistical designs in finance. In 1987 and 1988, 14 event studies were published in the Journal of Finance and another 26 in the Journal of Financial Economics. During that period the Journal of Risk and Insurance published five event studies.

The event study will be used more frequently by insurance researchers as they become more familiar with its methodology and better recognize its applicability to insurance issues. The technique is particularly well suited to assessing the impact of insured or uninsured events on individual firms, or measuring the impact of market-wide events such as regulation or legislation on the market as a whole or on individual market (industry) segments.

Types of event studies vary. Market efficiency studies assess how quickly and correctly the market reacts to a particular type of new information. Information usefulness studies assess the degree to which company returns react to the release of a particular bit of news. Accounting scholars have used the information usefulness concept to assess the value of accounting information (see, e.g., Foster, 1973 and 1975, or Watts, 1973 and 1978). Such studies have also been used to assess the extent to which market participants were watching the accounting profession's policy making process (see, e.g., Basu, 1981, or Collins, Rozeff and Salatka, 1982).

Analogies to each of these topics can be found in recent Journal of Risk and Insurance articles. Sprecher and Pertl (1983) assess the impact that large losses had on shareholders' returns in a number of industries. Davidson, Chandy and Cross (1987) look at the same issue in the airline industry, where insurance is mandatory. Cross, Davidson and Thornton (1988) use the event study to examine the effect of director and officer lawsuits on firm value. Lacey (1988) examines the relationship between excess returns and property-liability firms' income to evaluate a collusion theory of the liability crisis. VanDerhei (1987) assesses the impact of voluntary termination of overfunded pension plans.

All these insurance studies follow a recent trend in event studies. They involve what Bowman (1983) calls metric explanation. In a metric explanation study, the event study is only the first step. Early studies explained the metrics (extra returns) by splitting the sample into different subsamples and examining whether the unusual element of returns differed among the subsamples. [1] More recent studies use excess returns as dependent variables in cross-sectional regressions to explain the source of the extra returns. [2]

These then are the basic types of event studies: market efficiency, information value, and metric explanation. These classifications are not mutually exclusive. It is quite common for event studies to combine a little of each. There are also methodology studies of the event study design, research that considers how best to run event studies. Event study methodology papers are unusual for business research where econometricians and statisticians typically use statistical theory to define how a test should be run. In event studies, the issues have been tested empirically, not theoretically, to find out what will work given the nature of financial data. Such investigations normally involve simulations. The researcher selects, or creates, [3] a hypothetical sample of securities, injects abnormal returns on arbitrarily defined event dates, and tests competing methodologies to ascertain which is better at detecting the event.

This article draws heavily upon the event study methodology literature to identify the problems that a researcher faces and to suggest the solutions that current knowledge provides for these problems. The presentation is as non-technical as is reasonable, with a minimum of symbolic notation. On each topic, interested readers are referred to applicable studies for greater detail.

First comes a general description of the event study design together with an outline of the research decisions, or problems. The balance of the article considers solutions to each of the problems, providing evidence and references for the suggested choices.

The Event Study Design

The earliest applications of the event study were by Fama, Fisher, Jensen and Roll [FFJR] (1969) and Ball and Brown (1968). FFJR can be characterized as an efficient market study, while Ball and Brown is an information usefulness study. FFJR investigated how quickly and correctly the market reacted to announcements of stock splits. Ball and Brown considered the value of companies' annual earnings announcements. After designating the information event of interest, both studies follow the event study process that has since become classic. The steps are as follows:

1. Define the date upon which the market would have received the news.

2. Characterize the returns of the individual companies in the absence of this news.

3. Measure the difference between observed returns and "no-news" returns for each firm: the abnormal returns.

4. Aggregate the abnormal returns across firms and across time.

5. Statistically test the aggregated returns to determine whether the abnormal returns are significant and, if so, for how long.
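As an illustration only, the five steps can be sketched for a single firm using the simplest "no-news" benchmark discussed later (the firm's own mean return). The function name, window lengths, and test statistic here are our own choices, not part of any standard library:

```python
import numpy as np

def mean_adjusted_study(returns, event_idx, est_len=100, half_window=5):
    """Steps 2-5 of the event study for a single firm, using the simplest
    'no-news' benchmark: the firm's own average return (mean-adjusted)."""
    # Step 2: normal return = average return over a pre-event estimation period
    est = returns[event_idx - est_len - half_window : event_idx - half_window]
    normal = est.mean()

    # Step 3: abnormal returns = observed minus normal over the event window
    window = returns[event_idx - half_window : event_idx + half_window + 1]
    abnormal = window - normal

    # Step 4: aggregate over event time (across firms, one would average
    # the abnormal returns of many firms first)
    car = np.cumsum(abnormal)

    # Step 5: a crude per-day t-statistic from estimation-period dispersion
    t_stats = abnormal / est.std(ddof=1)
    return abnormal, car, t_stats
```

Step 1, pinning down the event date, is done by the caller; as the discussion below emphasizes, it is often the hardest step.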

Event Study Problems

The first, and potentially most important, choice an event researcher makes is what to study. Because event studies have become commonplace, the first hurdle an author faces with reviewers is convincing them of the merit of the project.

The event should be something of wide interest in the field. A good story is needed to explain anticipated market reaction to a particular bit of news. If the explanation of market reaction is obvious, time is of the essence. The researcher will be in a race to recognize the topic and get the research done and submitted. Even then there is the risk that reviewers will be unimpressed because of the obvious nature of the results.

Define Event Date

After defining the event, the researcher must determine when it took place. Timing of an event may seem obvious. It is not. The issue is not when an event occurred, but when the market, that is, its most interested and well informed segment, could have reasonably anticipated the news.

Characterize Normal Returns

Before discussing calculation of returns, some terms need defining. In event studies, it is important to distinguish between two periods. During the estimation period, estimates are derived. These estimates are used to define expected or normal returns for each firm during the event window. The event window is the event day plus and/or minus some number of days, weeks or months when the sample firms' returns are observed to see if anything unusual happened.

In event studies all time is kept relative to the event day. That day may or may not be the same calendar day for all firms. A study of stock splits, for example, would involve many different calendar days. By contrast, the study of a new regulation, or a policy change in accounting, might involve the same calendar event day for all firms. [4]

There are several approaches for characterizing a firm's normal returns. Some of the more popular include: mean returns, market returns, control portfolio returns, and risk-adjusted returns.

Mean Returns: In the mean return approach, a company is expected to generate the same return that it averaged during the estimation period.

Market Returns: In this approach, a company, in the absence of news, is expected to generate the same returns as the rest of the market during each day, week, or month of the event window.

Control Portfolio Returns: With control portfolios, the researcher selects a portfolio of companies that resemble the sample firms except for the absence of news about the firms in the control portfolio. The control firms might be in the same industry as the sample firms or they might be of the same risk (same beta). If controls are determined according to statistical criteria, such as beta, this similarity is assessed during some estimation period other than the event window. Abnormal return is the difference between the observed return of the sample firm and the return of the control portfolio for each day, week, or month during the event period.

Conditional, or Risk-adjusted Returns: This approach, which seems to be the current favorite, uses a regression model to predict expected returns for the firm. Abnormal returns, prediction errors, or residuals are defined to be the difference between the returns observed and those predicted by the regression model.

Calculate Excess Returns

Excess returns are the mathematical difference between observed returns and the returns predicted for that day, week, or month. When the company mean return is used as the prediction, these returns are referred to as mean-adjusted returns. When the return of the market is used as the "no-news" expectation, the difference is called a market-adjusted return. [5] When the regression model is used to predict a market-conditional return, the abnormal return may be given a variety of names. Some commonly encountered labels are abnormal returns, excess returns, prediction errors (PE), or residuals.

Aggregate Excess Returns

Before testing, prediction errors (PE) must be aggregated both across firms and across time. Aggregation across firms generally involves a simple averaging of PEs for all firms in the sample on a given day, where days are counted in event time. Call this average AR_t, the average prediction error, or average residual, across all firms on day t, where t is measured relative to the event day: t = 0 is the event day, t = -1 is the day before, and t = +1 is the day after.

Aggregation over time is most commonly a simple accumulation over a number of event days. The cumulative abnormal return, CAR(t_1, t_2), is the sum of all of the AR_t between t_1 and t_2.
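The two aggregation steps can be sketched as follows; the prediction-error matrix and the index conventions are our own assumptions:

```python
import numpy as np

def aggregate_pe(pe, event_col, t1, t2):
    """Average prediction errors across firms, then cumulate over event time.

    pe:        (n_firms, n_days) prediction errors aligned in event time
    event_col: column corresponding to event day t = 0
    t1, t2:    event-time bounds of the accumulation window (inclusive)
    """
    ar = pe.mean(axis=0)                                 # AR_t for each event day
    car = ar[event_col + t1 : event_col + t2 + 1].sum()  # CAR(t1, t2)
    return ar, car
```

Because firms' event days usually fall on different calendar days, aligning the PE matrix in event time (rather than calendar time) is what makes the cross-firm average meaningful.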

Run Statistical Tests

The last step in the event study is to test aggregated returns statistically. Early event studies often used graphics as the primary method of interpretation. CAR plots were presented to show the reader how the market reacted to an event. Such pictures are still routine in event studies, but now they are expected to be supported with more rigorous statistical tests. This is an area where methodology studies provide helpful guidance.

Approaches to Event Study Problems

The problems in event studies cannot be solved as such. They can only be dealt with. The purpose of this discussion is to indicate what others have found to be important and what appears to work.

Event Dates

Brown and Warner [BW] (1980, 1985) conducted two landmark simulation examinations of event study methodology. Their final conclusion was that "even if a researcher doing an event study has a strong comparative advantage at improving existing methods, a good use of his time is still in reading old issues of the Wall Street Journal [WSJ] to more accurately determine event dates" (1980, p. 249).

BW emphasize the importance of carefully defining event dates because of the results of their power tests on uncertain event dates. The more days (or months, in the 1980 study) one must include in the event window, because of inability to pinpoint an event date, the lower the power of the event methodology (1980, pp. 224-27). Being able to use daily data and to specify the event day correctly serves to increase the statistical power of the event study technique (1985, pp. 13-14).

Misidentification of an event date can obscure an issue. Early merger studies used the date of a merger and found no significant evidence of shareholder return effects (see, e.g., Mandelker, 1974). When Asquith, Brunner and Mullins (1983) used the date on which the intent to merge was announced, however, they found significant ARs and CARs.

Using monthly returns, Pinches and Singleton (1978) and Griffin and Sanvicente (1982) found that bond rating changes convey little new information to the markets in that there is no noticeable reaction in the announcement month. When Glascock, Davidson and Henderson (1987) reexamined the issue using daily data, they found stronger (significant) announcement effects and some lag in adjustment to the new information.

The Glascock, Davidson and Henderson study provides another warning for those defining event dates. The WSJ does not publish all the news, and sometimes the news is out in other forms before it is published. Glascock, Davidson and Henderson, for example, found an average lag of three days between Moody's wire service rating change announcement and the time Moody's published the information. Only 34 percent of these bond reratings were published by the WSJ. Also, it is essential to find the earliest date of public disclosure when one is defining event dates. A WSJ index search is a start; it is not a finish.

Even though event date uncertainty will frequently be a problem, the event study design is still effective. Dyckman, Philbrick and Stephan (1984) found that testing accumulated excess returns over a slightly longer period allows a researcher to detect events without precisely pinpointing the timing of the event. [6]

Normal Returns

There are a number of issues, or problems, to deal with in calculating normal returns. One must decide how to measure returns and which approach to use to do so. If a regression approach is used, a particular model must be selected, the period over which the model is estimated must be selected, and there are potential econometric problems to deal with.

Return Calculation: Most event studies barely mention how they calculate returns. Beaver (1982) discusses the problem. Fama (1976, pp. 17-20) suggests that continuously compounded returns conform better to the normality assumptions underlying regression. Although they do not elaborate, BW (1985) indicate they got similar results with simple and continuously compounded returns. Thompson explicitly reports that "return form also does not seem to be an important consideration in event studies" (1988, p. 81).

Although it does not seem to make much difference, a large proportion of the event studies use continuously compounded returns. All that means is that returns are used in log form as follows:

R_jt = ln(1 + Return) (1)

where R_jt = continuously compounded return on security j in period t.

Returns normally include both price changes and dividends so this equation can also be written:

R_jt = ln[(Price_(t+1) + Dividend)/Price_t] (1a)

There are several reasons for using log transformed returns. Besides improving the normality of the return distribution, the transformation eliminates negative values and makes it easier to convert daily returns to weekly or monthly returns: continuously compounded returns are additive, so the return for a trading week is the sum Σ_(t=1..5) R_jt, i.e., the sum of five days' R_jt [assuming no holidays]. [7]
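A small numerical illustration of equations (1) and (1a); the closing prices below are made up. The convenience of the log form is that daily log returns sum to the multi-day log return:

```python
import math

def log_return(price_next, price_now, dividend=0.0):
    """Continuously compounded return, as in equation (1a)."""
    return math.log((price_next + dividend) / price_now)

# six made-up daily closing prices bracketing a five-day trading week
prices = [100.0, 101.0, 100.5, 102.0, 103.0, 104.0]
daily = [log_return(p1, p0) for p0, p1 in zip(prices, prices[1:])]

# the five daily log returns add to the weekly log return
weekly = sum(daily)
```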

Excess Return Calculation

The basics of the mean-adjusted, market-adjusted, and control portfolio approaches have been discussed. Here we discuss only where these approaches might be useful and whether they work.

Mean-Adjusted Returns: Mean-adjusted returns are event period returns minus a constant, that constant being the average return for that firm during its estimation period. BW's simulations indicate that the technique works relatively well, producing results that are comparable to the more complicated regression models for both monthly (1980, p. 224) and daily returns (1985, p. 13). Dyckman, Philbrick and Stephan (1984, p. 18) ran multiple comparison tests and found that the mean-adjusted returns model did not work as well as a regression model, the market model.

Life would be easier if the mean-adjusted model could be used. Unfortunately, things are not that simple. BW find serious problems with the technique when a study involves calendar clustering, that is, when the event dates for all the firms are the same or fall close together. Klein and Rosenfeld (1987) recently found another problem. If many of the events occur during bull (bear) markets, this approach produces upward (downward) biased residuals. A similar problem occurs if an event frequently takes place after a company's stock has experienced a runup. [8] Then estimates of the firm's average return derived from a preceding estimation period would be upward biased and residuals consequently would be downward biased.

Market-Adjusted Returns: Market-adjusted returns are especially convenient. They involve no estimation process and no estimation period. To calculate market-adjusted returns, subtract the return on the market for the period from the return on the stock for the same period. This approach is helpful when there are no data prior to the event, as, for example, in the case of initial public offerings. [9] Further, BW (1980, p. 224; 1985, p. 13) suggest the power of the technique is comparable to that of the regression-based, market model approach. On the other hand, the Dyckman, Philbrick and Stephan (1984, p. 18) paired comparison test indicates that this model is less powerful than the market model. This approach also does not handle calendar clustering well.

Control Portfolio Returns: BW discuss a particular type of control portfolio where sample firms are assembled into unit-beta portfolios, i.e., portfolios with betas of one. Excess returns for the portfolio are then the return in excess of the market. BW (1980, pp. 223-49) report this technique to be less effective than the other three approaches because of the manner in which portfolio weights are defined (1980, pp. 238-39).

The control portfolio approach is not limited to matching betas to the market. It is easy to envision using industry indexes as proxies for the return on control portfolios. In that case excess returns could be measured as the difference between event company returns and an industry average excluding those firms with events. [10]

Given human capital investments in regression models, it is unlikely that control portfolios will gain popularity, but there is reason to consider this approach. Comparison to similar non-event companies controls for more than a statistical measure of risk; excess returns are easily interpreted; and the definition of an estimation interval and the event day is less critical. The approach is convenient and does have merit, even though it is less frequently employed than the conditional, or risk-adjusted, returns model.

Conditional, or Risk-Adjusted, Returns: This approach dominates the event study literature. Some of the major issues to be resolved include: which regression model to use, how to define the predictor variables in the model selected, what to use as the estimation period, and which econometric problems to worry about.

Regression Models

Symbolic notation is the most efficient method of presenting the discussion. The more popular regression models include the following:

[Mathematical Expressions Omitted]

The single-index market model (SIMM):

R_jt = α_j + β_j R_mt + ε_jt (2)

All five regression models have appeared in the literature. There is no need to explain them all completely. Explanation of how each is applied, however, is appropriate before referring to Brenner (1979) to justify not belaboring them.

The SIMM is the simplest to use. Regress R_jt on R_mt during the estimation period to get α_j and β_j. Then, during the event window, predict R_jt using the market returns during the window. Subtract the predicted (or expected) E(R_jt) from the observed R_jt to define excess returns. The excess returns model works the same way except that risk-free rates are subtracted from R_jt and R_mt before running the regressions.
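A minimal sketch of that procedure (ordinary least squares over the estimation period, prediction errors over the window); the function and variable names are ours:

```python
import numpy as np

def simm_prediction_errors(r_j, r_m, est, win):
    """Estimate the SIMM by OLS over `est`, then form prediction errors in `win`.

    r_j, r_m: firm and market return arrays aligned in time
    est, win: slices selecting the estimation period and event window
    """
    beta, alpha = np.polyfit(r_m[est], r_j[est], 1)   # OLS slope and intercept
    expected = alpha + beta * r_m[win]                # E(R_jt) given the market
    return r_j[win] - expected                        # excess returns
```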

The zero beta application is slightly different. Fama and MacBeth (1973) ran cross-sectional regressions to get estimates of λ_0t and λ_t for the period January 1935 to June 1968, which they have supplied to a number of researchers. [12] With the Fama-MacBeth approach, a researcher can allow betas to vary from one t to the next. Therefore, beta could be estimated with a time series process.

With the own variance model, beta is not the only measure of risk; variance is also considered to explain part of R_jt. Application of this approach is analogous to the zero beta model.

Fortunately for the purposes of this review, Brenner (1979) has examined these four approaches. He found that the simplest, the SIMM, did essentially as well as the more complicated approaches.

The last model presented, the multiple index model (6), is not a specific model but a generic description of regression models where researchers add indexes for influences other than the market. For example, Langetieg (1978) added an industry factor in his merger study, finding it important for performance measurement. Thompson (1988) evaluated the use of industry indexes in a BW type simulation study and found them not to increase the power of the event study.

Because of the interest rate sensitivity of bank stock returns, interest rate indexes are beginning to appear in event studies involving banks. [13] Given the nature of insurance company income, this may work in that industry as well. Whether this is appropriate is an empirical issue.

Measurement of Independent Variables: The SIMM is the most frequently used regression model. It has only one independent variable, R_mt. The issue is whether this index should be calculated on a value- or equal-weighted basis. Theory says to use a value-weighted index because that most appropriately reflects total market performance. The arguments for a value-weighted index can be found in Roll (1981) and Ohlson and Rosenberg (1982). Unfortunately, using an equally weighted index is more likely to detect abnormal returns. Evidence is provided by BW (1980), and an explanation is provided by Peterson (1989).

Estimation Period: There are essentially four choices for the estimation period: before the event window, during the window, after the window, and around the window. The majority of event studies use an estimation period before the event. Appropriate disclaimers are included to assure the reader that the window was wide enough to avoid contaminating the regression. [14]

Market-adjusted returns essentially involve estimation during the window by assuming alpha is zero and beta is one. The control portfolio approach also involves estimation during the window, although construction of the control portfolio normally is based on data from before or after the event window.

A strong case can be made for a post-event estimation period if the event is expected fundamentally to alter the firms' sensitivity to market returns. If the event is important enough to change alpha and beta, then values from before the event are not appropriate. Evidence of alpha or beta changes may be used as evidence of an event.

The problem of alpha and beta shifts can be handled by using an estimation period around the window and testing for parameter shifts. The Gujarati (1970) technique is especially convenient for such tests, as follows:

R_jt = α_j + Δα_j D + β_j R_mt + Δβ_j D R_mt + ε_jt

where everything is exactly the same as equation (2) except that the dummy variable, D, equals one after the event and zero before. If Δα_j and/or Δβ_j is statistically different from zero, there has been a parameter shift. Observations on R_jt and R_mt during the window are excluded. If there is evidence or reason to question parameter stability, then a post-event estimation period may be appropriate. [15] Arguments for such a process can be found in Stickel (1984).
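A sketch of the shift regression under our own naming, with event-window observations assumed already excluded from the inputs:

```python
import numpy as np

def parameter_shift(r_j, r_m, post_event):
    """Fit R_jt = a + da*D + b*R_mt + db*D*R_mt + e by OLS.

    post_event: boolean array, True for post-event observations
    (event-window observations should already be excluded).
    Returns (a, da, b, db); da or db different from zero signals a shift.
    """
    d = post_event.astype(float)
    X = np.column_stack([np.ones_like(r_m), d, r_m, d * r_m])
    coef, *_ = np.linalg.lstsq(X, r_j, rcond=None)
    return tuple(coef)
```

In practice one would test da and db against their standard errors rather than eyeball the point estimates.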

Econometric Problems: Regression models are based on a number of statistical assumptions. Specifically, the models assume that the residuals: are normally distributed with a mean of zero, are not serially correlated, have a constant variance, and are not correlated with the explanatory variables. Further, in finance where several different regressions may be run for different securities, it is also assumed that there is no correlation between residuals for the different firms. [16]

In business studies, there is reason to be concerned about each of these assumptions. Security returns are not normally distributed, a problem that is worse in the case of the daily returns that are increasingly popular in event studies. There is evidence of slight serial correlation in security returns, which can be more pronounced in the residuals if observations are less than perfectly matched (nonsynchronous trading). There is evidence that variance shifts sometimes are associated with financial events. There is evidence that the residuals are correlated with values of the independent variable, R_mt. When a study involves calendar clustering, there is evidence of contemporaneous covariance between residuals of different firms. Fortunately, the event study design appears to be robust to most of these problems, or techniques have been developed to handle them.

Nonnormality: The nonnormality problem is potentially more troublesome for studies using daily data. As reported by BW (1985, pp. 8-10) and replicated by Berry, Gallinger and Henderson (1990), daily returns are non-normal. Fortunately, the same is not true of the residuals. Either the distribution of the residuals is close enough to normal that such a null cannot be rejected (see Berry, Gallinger and Henderson, 1990), or the simulation results indicate that there is no gain in power by using distribution-free test statistics (BW, 1985, p. 25).

Serial Correlation and Nonsynchronous Trading: There is statistically significant autocorrelation in the residuals from SIMM regressions using daily data. It is possible that nonsynchronous trading and the induced bias in betas might be part of the problem (e.g., see BW, 1985, p. 19).

Nonsynchronous trading refers to the mismatching of the values for R_mt and R_jt. Normally, R_mt would be an index created by averaging all R_jt, computed from each security's last trade price on day t. Some of the securities would not have traded at or near the end of the trading day. Consequently, the individual R_jt used in the SIMM regressions would in some cases involve observations that were not exactly end-of-day values. Further, R_mt is made up of R_jt, some of which are not end-of-day values. This induces a bias in the betas of individual securities. The result is that the betas of infrequently traded securities are downward biased, while shares trading with more than average frequency have upward biased betas.

Two widely recognized techniques have been suggested to correct for the bias. Scholes and Williams [SW] (1977) estimate lead, lagged, and contemporaneous betas separately and take a weighted average of these estimates. Dimson (1979) runs a one-pass regression of R_jt on R_mt and a number of leading and lagging values of R_mt, and adds up the coefficients. Fowler and Rorke (1983) show the two approaches to be equivalent after making a minor correction in the Dimson procedure.
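A sketch of the Dimson aggregated-coefficients estimator (our implementation; the Fowler-Rorke correction is not applied):

```python
import numpy as np

def dimson_beta(r_j, r_m, lags=1, leads=1):
    """One regression of R_jt on lagged, contemporaneous, and leading
    market returns; the Dimson beta is the sum of the slope coefficients."""
    t = len(r_m)
    lo, hi = lags, t - leads          # trim so all leads/lags are available
    cols = [r_m[lo + k : hi + k] for k in range(-lags, leads + 1)]
    X = np.column_stack([np.ones(hi - lo)] + cols)
    coef, *_ = np.linalg.lstsq(X, r_j[lo:hi], rcond=None)
    return coef[1:].sum()             # sum of the slope coefficients
```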

Unfortunately, the extra work does not seem to strengthen event study results. Reinganum (1982) and Theobald (1983) do not find the SW and Dimson betas significantly better than OLS estimates using daily data. The procedures do not eliminate the autocorrelation in event study residuals, [17] and they do not improve the power of event studies in simulation studies. [18] In their daily returns study, BW tried a simple autocorrelation adjustment procedure based on a revised test statistic, but it did not help much. [19]

In general, correction for autocorrelation of residuals appears unwarranted. Weak-form efficient market tests document the limited autocorrelation in security returns. Autocorrelation in the residuals is even smaller and appears to pose little problem for event studies.

Variance Shifts: Beaver (1968) and Patell and Wolfson (1979) provide evidence of variance shifts coincident with financial events. BW (1985, pp. 23-25) discuss the problems caused by variance shifts and suggest the use of a cross-sectional estimate of the variance following Penman (1982). The details are spelled out in their appendix (1985, p. 28).

As BW point out, there are costs to using such cross-sectional measures. Such a calculation implicitly assumes that the variance for every firm is the same on day t and the estimates ignore estimation period data. Consequently, BW suspect the procedure will be less powerful and their simulation results confirm their suspicions.

Collins and Dent (1984) examine potentially more powerful techniques that employ a generalized least squares (GLS) model and allow for variance shifts and cross-correlation among different firms' residuals. Their results are encouraging, although their models assume that the scale of the variance shift is estimable for all firms in the sample. Collins and Dent also evaluate techniques that handle correlation between residuals and the independent variable in event studies - the return on the market.

Correlation Between Residuals and R_mt: If the type of event under study has a greater probability of occurring in a bull market than a bear market, it creates a problem. If expected residuals are based on an estimation interval where the market was not doing as well, the conditional expectation of R_jt is misspecified, and that misspecification is induced into the excess return calculation.

Patell (1976) was the first to correct for this. He used a forecast error rather than a standard deviation to test whether the excess return was statistically significant. Further, he used standardized excess returns in the aggregation process. The Patell article has been heavily cited in the accounting literature. Dodd and Warner (1983) introduced these metrics in the finance literature, and their article is also widely cited as a precedent for the use of these test statistics. [20]
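The core of the Patell adjustment, dividing each prediction error by its forecast-error standard deviation rather than the raw residual standard deviation, can be sketched as follows (our notation and simplifications):

```python
import numpy as np

def standardized_pe(r_j, r_m, est, win):
    """Patell-style standardized prediction errors for one firm.

    Each window PE is divided by its forecast-error standard deviation,
    which inflates the estimation-period residual variance to reflect
    out-of-sample prediction and the location of R_mt in the window."""
    x, y = r_m[est], r_j[est]
    T = len(x)
    beta, alpha = np.polyfit(x, y, 1)
    s2 = np.sum((y - alpha - beta * x) ** 2) / (T - 2)   # residual variance
    pe = r_j[win] - (alpha + beta * r_m[win])
    inflate = 1 + 1 / T + (r_m[win] - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
    return pe / np.sqrt(s2 * inflate)
```

Under the null of no event, these standardized PEs are roughly unit-variance, which is what makes them comparable across firms when aggregated.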

Collins and Dent (1984) test the Patell statistic (and other techniques). This simple modification performs well when there is no event clustering and there are relatively small shifts in residual variances. If these two problems are pronounced, more sophisticated estimation models are necessary to preserve statistical power. [21]

Event Clustering: Clustering is one of the more troubling problems in event studies. It occurs in a number of forms, and it is easy to envision insurance studies where all types of clustering would be encountered.

Calendar clustering refers to events occurring at or near the same time. Calendar clustering is also called event clustering. Both BW studies recognize the problems caused by calendar clustering, and researchers have since expended considerable effort trying to handle the problem. A lot of work has been done in accounting because so many accounting studies involve the calendar clustering problem. [22]

Industry clustering refers to events concentrated in the same industry. Both it and event clustering reduce power. [23]

Risk clustering refers to events all occurring to companies with similar betas. With risk clustering, the conditional return models perform reasonably well. [24] Supposedly, abnormal performance is easier to detect in lower-risk (beta) companies. [25] Ability to detect abnormal performance in low-risk industries presumes events are diversified over market phase. During boom conditions, any abnormal performance in low market-sensitivity companies could be swamped by general market movements. [26]

Two different approaches have been used to handle the contemporaneous covariance, or event clustering, problem. Earlier approaches involved modification of the test statistics. Recently, emphasis has been on using regression models that incorporate estimates of the contemporaneous covariance in the estimation of the regression coefficients. These estimation procedures have been labeled differently, depending upon the author and the particular nuances the author is emphasizing. The approach has been called joint generalized least squares (GLS) by Collins and Dent (1984) and Collins, Rozeff and Salatka (1982), estimated GLS by Thompson (1985), and multivariate regression model (MVRM) by Binder (1985). The basic technique is an application of Zellner's (1962) seemingly unrelated regression, which was suggested for use in event studies by Gibbons (1980), although it had been used earlier in economics. [27]

Conceptually, the application is not too difficult. Rather than assume residuals across equations are independent, it is assumed that they are correlated and that the correlation process is stable. A set of first pass regressions are run and a variance-covariance structure is estimated across and between the system of equations. The variance-covariance matrix is then used in a transformation of the observation matrix to derive estimates of the regression parameters.
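The two-step estimation can be sketched numerically. The sketch below is a minimal illustration using simulated returns (all data and variable names are invented for the example, not taken from the article). One caveat worth seeing in code: when every equation shares the same regressors, as here, the joint GLS estimates collapse to equation-by-equation OLS (Zellner's classic result), so the technique pays off only when event-day dummies or other regressors differ across equations.

```python
# Two-step "joint GLS" (seemingly unrelated regressions) sketch on
# simulated market-model data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 3                       # days, firms
r_m = rng.normal(0.0, 0.01, T)      # market return

# Correlated residuals across firms (the contemporaneous covariance problem)
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]]) * 1e-4
eps = rng.multivariate_normal(np.zeros(N), cov, T)
betas = np.array([0.8, 1.0, 1.2])
R = 0.001 + r_m[:, None] * betas + eps    # firm returns, market model

X = np.column_stack([np.ones(T), r_m])    # same regressors for every firm

# Step 1: first-pass OLS, equation by equation
B_ols, *_ = np.linalg.lstsq(X, R, rcond=None)
resid = R - X @ B_ols
S = resid.T @ resid / (T - X.shape[1])    # estimated residual covariance

# Step 2: GLS on the stacked system, Omega = S kron I_T
S_inv = np.linalg.inv(S)
XtX = np.kron(S_inv, X.T @ X)
Xty = (X.T @ R @ S_inv).T.ravel()         # stacked firm by firm
b_gls = np.linalg.solve(XtX, Xty).reshape(N, 2)

# With identical regressors, GLS reproduces OLS exactly (Zellner, 1962)
```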

Researchers have gone a step further in the use of the technique. They have added dummy variables during the event period (or on individual event days) to estimate the event parameters. The resulting system of equations looks as follows:

[Mathematical Expressions Omitted]

One advantage of joint GLS is that forecast error estimates take into account contemporaneous covariance. Further, as Thompson (1985) explains, GLS allows joint tests, i.e., testing whether any one of the prediction errors ([gamma] in this model) is significant for the individual companies, not just for the portfolio of companies as a whole.

Binder (1985) provides a further simplification of the model. If one wants all sample firms to be accorded equal weight in the inferences, equation (8) can be collapsed to a portfolio model as follows:

[Mathematical Expression Omitted]

where all variables are exactly as in (8) except that they refer to the portfolio. The value [gamma][.sub.pi] is the same as the AR in the simpler event studies. The difference is that the forecast error of [gamma][.sub.pi] considers the contemporaneous covariance between the [epsilon][.sub.jt] across firms.

It is possible that the researcher will want to give different weights to the various firms. One possibility is to weight firms with respect to the quality of the conditional return forecasts. More simply, run single equation regressions of the SIMM, then weight the companies in the portfolio according to the R[.sup.2] of these regressions. [28]
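The R-squared weighting idea can be sketched as follows. The returns are simulated and all names are illustrative; the point is simply that each firm's weight comes from the fit of its own single-equation market model.

```python
# Weight each firm by the R-squared of its single-equation market-model
# regression. Simulated data; a sketch, not a prescribed implementation.
import numpy as np

rng = np.random.default_rng(5)
N, T = 4, 60
r_m = rng.normal(0.0, 0.01, T)
betas = rng.uniform(0.5, 1.5, N)
R = 0.001 + betas[:, None] * r_m + rng.normal(0.0, 0.02, (N, T))

X = np.column_stack([np.ones(T), r_m])
w = np.empty(N)
for j in range(N):
    coef, *_ = np.linalg.lstsq(X, R[j], rcond=None)
    resid = R[j] - X @ coef
    w[j] = 1.0 - resid.var() / R[j].var()   # regression R-squared
w /= w.sum()                                # normalize to portfolio weights
```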

This is a relatively new line of thinking in event study research. How effective it will be is not yet clear. Collins and Dent (1984) find merit in the technique, especially when encountering significant cross-correlations and shifts in residual variance. On the other hand, Malatesta (1986, p. 38) reports that "there is no evidence that joint GLS provides a practical payoff in the typical event study context." [29]

Measuring Excess Returns

This is one area where the solution is relatively straightforward. The excess return in period t (day t for simplicity) is the difference between R[.sub.jt] (observed) and E(R[.sub.jt]) (predicted from one of the regression models or, in the two simpler cases, from the firm's estimation-period average or the concurrent market return).

Aggregating Excess Returns

Although there are probably others, there are three common methods of aggregation: cumulative abnormal return (CAR), abnormal performance index (API), and standardized cumulative prediction error (SCPE).

CAR and API both start by averaging prediction errors across firms on day t; again, AR[.sub.t] is the arithmetic average excess return, or prediction error, on day t. After that, CAR and API are cumulative sum and geometric product, respectively, as follows:

[Mathematical Expressions Omitted]
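In code, using made-up average excess returns for illustration, the two accumulations look like this (one common formulation of the API is the running product of (1 + AR[.sub.t]); the article's exact expression is omitted above):

```python
# Sketch of the two aggregation rules, assuming AR holds the cross-firm
# average excess return for each event-window day (values invented).
import numpy as np

AR = np.array([0.010, -0.004, 0.006, 0.002])

CAR = np.cumsum(AR)          # cumulative abnormal return: arithmetic sum
API = np.cumprod(1.0 + AR)   # abnormal performance index: geometric product
```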

Ball and Brown (1968) used the API to assess the value of accounting income numbers. API[.sub.t,T] is the extra return one could make over the period from t to T by acting on the information at time t. The API is primarily an expository device. [30] It is not truly a test statistic, although Winsen (1977) suggests how to convert the API to a test statistic.

Because of its testability, the CAR is more widely employed. It is the measure used by FFJR, and it has withstood the test of time. Recent practice has been to standardize prediction errors before aggregation and accumulation. This process changes the meaning of the aggregated value and simultaneously creates a new set of test statistics.

Statistical Tests

There are two issues in statistical testing: (1) whether to use parametric or nonparametric tests and (2), given that choice, which test statistics to employ.

Parametric Versus Nonparametric Tests: In their monthly study, BW report "the differences between the empirical frequency distribution of the test statistics and the t-distribution are not large. On the other hand, certain non-parametric tests used in event studies are not correctly specified" (1980, pp. 248-49).

Because of the nature of daily data, BW reconsidered the issue when examining daily data. They again report "methodologies based on the OLS market model and using standard parametric tests are well specified under a variety of conditions" (1985, p. 25). Dyckman, Philbrick and Stephan (1984, p. 29) say "the nonnormality of individual-security daily-return residuals has little effect on the inferences drawn from the use of the t-test applied to portfolios."

Berry, Gallinger and Henderson (1990) specifically examine the choice of parametric versus nonparametric tests with daily data. They also find parametric t-tests to work well; the nonparametric tests do not, as their empirical sampling distributions do not follow the theoretical distributions.

The guidance from these studies is clear. Nonparametric tests are an unnecessary complication and do not work well. The choice is the simple t test or, for aggregated excess returns, tests based on sums of t's or sums of squared t's.

The t-test can be found in any statistics book. Expressed in terms of an event study, for a single day, it would be:

Student t = (AR[.sub.t] - 0)/s[.sub.t] = Z (12)

where Z is the test statistic. Student's t approaches Z when the number of observations is large, which is generally the case in event studies. Besides, t is typically used as a time counter, and using Z reduces confusion.

The issues in event studies are how to calculate s[.sub.t], the estimated standard deviation; how to test aggregated ARs (or PEs); and whether to standardize before aggregation.

Continuing with the notation from equation (2), the errors from the estimation period are [epsilon][.sub.jt], where j is the company and t is the observation period (day), t = 1, . . ., k (there are k days in the estimation period). The estimated standard deviation for one firm's one-day prediction error would be:

[Mathematical Expression Omitted]

The variance for the CAR is found by summing the s[.sub.a][.sup.2] and dividing by the number of companies. The standard deviation is the square root of the variance.
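A numerical sketch of this independence-based CAR test follows. The residuals and excess returns are simulated, and the variable names are illustrative; this is one common way to operationalize the calculation, not necessarily the article's exact estimator.

```python
# Estimate each firm's residual variance in the estimation period, form the
# variance of the daily average AR, and scale the CAR accordingly.
import numpy as np

rng = np.random.default_rng(1)
N, k, m = 10, 120, 5                  # firms, estimation days, event days
eps = rng.normal(0.0, 0.02, (N, k))   # market-model residuals, estimation period

s2_j = eps.var(axis=1, ddof=2)        # per-firm residual variance (2 params fit)
s2_AR = s2_j.sum() / N**2             # variance of the day-t average AR

AR = rng.normal(0.0, np.sqrt(s2_AR), m)   # simulated average excess returns
CAR = AR.sum()
Z = CAR / np.sqrt(m * s2_AR)          # approximately N(0, 1) under the null
```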

More recent event studies do not test CARs. Instead, ARs are standardized before they are aggregated, and the standardized aggregates form the basis of the test statistics. BW (1985, p. 28) illustrate the technique when discussing tests assuming cross-sectional dependence. [31] What they refer to as the standardized excess return (SER) is calculated as:

SER[.sub.t] = AR[.sub.t]/s[.sub.at] (15)

where the subscript t indicates that s[.sub.at] is estimated for that particular day. The test statistic for N companies on that day would be:

[Mathematical Expression Omitted]

This approach to aggregation appears to be the current trend, but the application includes one more improvement. The estimate for the standard deviation on a one-day return is no longer s[.sub.j]. The estimate (s[.sub.j]) assumes that variation in the market during the event period is essentially the same as it was during the estimation period. Further, this estimate does not adjust for the number of observations in the estimation interval. These added considerations were included in the standard error of the forecast (s[.sub.ft]), originally proposed by Patell (1976) and popularized in the finance literature by Dodd and Warner (1983). The two values, s[.sub.j] and s[.sub.ft], are related as follows: [32]

[Mathematical Expression Omitted]

The second term under the radical (1/k) adjusts for the length of the estimation interval: the greater k, the smaller the error in the out-of-sample forecast (assuming stability in the relationship). The last term accounts for the error induced when R[.sub.mt] varies significantly from what was observed in the estimation interval, where R[.sub.m] is the average value of R[.sub.mt] in that interval. [33]
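Written out as a function, the forecast standard error looks as follows. The residual standard deviation, estimation length, and market-return series are simulated placeholders.

```python
# Forecast standard error with the two corrections described above.
import numpy as np

rng = np.random.default_rng(2)
k = 100
r_m_est = rng.normal(0.0, 0.01, k)    # market returns, estimation period
s_j = 0.02                            # firm's estimation-period residual s.d.

r_m_bar = r_m_est.mean()
ss_m = ((r_m_est - r_m_bar) ** 2).sum()

def s_f(r_mt):
    # 1/k term: length of the estimation interval; last term: distance of
    # the event-day market return from its estimation-period average
    return s_j * np.sqrt(1.0 + 1.0 / k + (r_mt - r_m_bar) ** 2 / ss_m)
```

The correction is smallest when the event-day market return sits at its estimation-period average and grows as it departs from that average.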

The standardized prediction error (SPE) is the PE divided by s[.sub.ft], which makes the SPE approximately unit normal, N(0, 1):

SPE[.sub.jt] = PE[.sub.jt]/s[.sub.ft] (18)

The standardized cumulative prediction error for firm j is the sum of the SPE[.sub.j] between any two days of interest, adjusted for the number of days (m) being considered, starting at t[.sub.1] and ending at t[.sub.2] (subscripts t[.sub.1] and t[.sub.2] are dropped to simplify notation):

[Mathematical Expression Omitted]

The test statistic for N firms is the sum of the SCPE[.sub.j] divided by the square root of the number of firms:

[Mathematical Expression Omitted]

and Z is N(0, 1). This test statistic is proving quite popular, as examination of recent finance journals will document. Collins and Dent's (1984) simulation studies indicate that it works quite well in the absence of variance shifts or contemporaneous covariance. [34]
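Equations (18) through (20) chain together naturally in code. The prediction errors and forecast standard errors below are simulated placeholders; in practice they come from the market-model regressions and the forecast standard error just described.

```python
# Standardize each firm's prediction errors, cumulate over the event
# window, then aggregate across firms. Simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
N, m = 8, 5                           # firms, event-window days
PE = rng.normal(0.0, 0.02, (N, m))    # prediction errors
s_ft = np.full((N, m), 0.02)          # forecast standard errors

SPE = PE / s_ft                       # eq. (18): approximately unit normal
SCPE = SPE.sum(axis=1) / np.sqrt(m)   # eq. (19): per-firm cumulative
Z = SCPE.sum() / np.sqrt(N)           # eq. (20): N(0, 1) under the null
```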

Examination of equation (18) suggests that this test statistic is essentially the same as a student t (or Z) test on a regression coefficient. Such is the case, which provides the transition to an even more convenient form of event study. Adding dummy variables to the market model creates what has been labeled the event parameter model as follows:

[Mathematical Expression Omitted]

where there are (t[.sub.2] - t[.sub.1]) dummy variables each taking a value of one on one event period day and zero otherwise. In this model, [lambda][.sub.jt] is equal to PE[.sub.jt], and its t (or Z) value from the regression printout is the SPE[.sub.jt]. [35]
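The equivalence between the event parameter model and the two-step prediction-error calculation can be verified numerically. In the sketch below (simulated data, illustrative names), the dummy coefficients from the single regression reproduce exactly the prediction errors from a market model fit on the non-event days alone.

```python
# Market model plus one dummy per event day, versus the two-step estimate.
import numpy as np

rng = np.random.default_rng(4)
T, m = 100, 3                        # total days, event days (the last m)
r_m = rng.normal(0.0, 0.01, T)
r_j = 0.001 + 1.1 * r_m + rng.normal(0.0, 0.02, T)
r_j[-1] += 0.05                      # injected abnormal return on the last day

# Design matrix: intercept, market return, and m event-day dummies
D = np.zeros((T, m))
D[np.arange(T - m, T), np.arange(m)] = 1.0
X = np.column_stack([np.ones(T), r_m, D])
coef, *_ = np.linalg.lstsq(X, r_j, rcond=None)
lam = coef[2:]                       # event parameters: the PE_jt

# Two-step check: fit on non-event days only, forecast the event days
Xe = np.column_stack([np.ones(T - m), r_m[: T - m]])
a, b = np.linalg.lstsq(Xe, r_j[: T - m], rcond=None)[0]
PE = r_j[T - m :] - (a + b * r_m[T - m :])
```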

If contemporaneous covariance is suspected to be a problem, the analysis is amenable to being run using seemingly unrelated regression. As a practical matter, with a large sample this requires considerable computer capacity. However, if the signs on [epsilon][.sub.jt] are expected to be the same for all j, or the sample can be partitioned so that this is a reasonable approximation, equation (21) can be run as a single portfolio [36] in the manner suggested in equation (9).

Using an equally weighted portfolio is equivalent to testing ARs and CARs, which are equally weighted averages. Sefcik and Thompson (1986) examine other weighting schemes.

These newer econometric approaches do not change the logic of the event study. The ability to run the analysis as a more sophisticated regression allows the researcher to control for a greater variety of statistical problems. Such advances may reduce the problems of industry and calendar clustering and residual variance shifts. [37] The staying power of event study research is at least partially attributable to its ability to examine complex issues in complex settings and still provide simple, intuitively appealing results. Greater precision in testing can only enhance this attribute.

Summary and Conclusion

The sheer volume of event study literature can be imposing to researchers first considering use of the paradigm. Yet an examination of the process reveals that the similarities between various event studies are greater than the differences. All event studies follow a well-defined series of steps. For each of these steps, there is existing literature to suggest how to handle the choices involved.

The event study is a classic design. Classic designs are simple and elegant, and, above all else, functional. The event study has become a classic because it works. It can be used under less than perfect conditions and still produce reliable results.

Event study methodology research provides one consistent lesson: even the simplest versions of the event study design work. The more specialized designs may be necessary for troublesome situations, but for most applications the simpler versions do nicely.

The event study is a popular chronological frame of reference for scholarly evaluation of financial events. What is done with the information it provides is what is important.

Use of reliable shared benchmarks allows us to communicate our findings with less concern about the language of our science and reduces duplication of effort. The event study is a very serviceable design. It is easy to learn to use, it is reliable, and it is easy to interpret. These attributes should keep it from going out of style any time soon.

1 Davidson, Chandy and Cross (1987, pp. 167-69) use this approach.

2 Cross, Davidson and Thornton (1988), Lacey (1988), and VanDerhei (1987) all do this.

3 A study using security returns created using a random number generator and empirically based parameter estimates is referred to as a Monte Carlo simulation.

4 In such a case, there are additional econometric problems. Simpler event study designs depend upon calendar diversification to eliminate across-sample residual correlations.

5 Davidson, Chandy and Cross (1987, p. 167) call this the average return model.

6 Dyckman, Philbrick and Stephan (1984, pp. 16-18) also examine whether using longer return differencing helps, i.e., whether using two- or three-day returns rather than one-day is more effective. Using the single day procedure is just as effective.

7 This can be especially helpful in studying intervalling effects in event studies. See Dowen and Isberg (1988).

8 Evidence suggests some financial events do frequently follow stock runups. Mandelker (1974) suggests that mergers frequently follow runups, although later studies cast some doubt on this. Dann and Mikkelson (1984), Asquith and Mullins (1986), and Mikkelson and Partch (1986) provide evidence showing that new security offerings often follow runups.

9 See Chalk and Peavy (1987).

10 Gonedes, Dopuch, and Penman (1976, p. 113) discuss using control portfolios to control for factors other than beta.

11 From the form of equation (3) it is possible to see that the CAPM is reconcilable with the SIMM if betas and interest rates are relatively stable. Equation (3) was used in Brenner's (1979) evaluation.

12 One could estimate such parameters for later years following their approach, although the process is quite computer-intensive.

13 Stone (1974) suggests a two-factor model that was tested by Lloyd and Shick (1977). Booth, Officer and Henderson (1985) show bank betas to be interest sensitive. Flannery and James (1984) find interest sensitivity to be priced in bank returns, and Sweeney and Warga (1986) report that this is also true for utilities.

14 The mean-adjusted returns model normally is based on a before-the-event estimation period; the mean referred to is the company's average return before the event.

15 Cross, Davidson and Thornton (1988) use a post-estimation process as validation for their results from a pre-event estimation period.

16 See Fogler and Ganapathy (1982) for a review of econometrics from a specifically finance perspective.

17 See BW (1985, p. 19) and Berry, Gallinger and Henderson (1987, p. 55).

18 See BW (1985, pp. 16-17), Dyckman, Philbrick and Stephan (1984), and Atchison (1986).

19 BW (1985, p. 20).

20 From 1976 through 1988, the Patell article has been cited 94 times. Through 1988, Dodd and Warner have been cited 63 times. These figures are based on the Social Science Citation Index.

21 Collins and Dent (1984) compare the efficacy of Jaffe's (1974) test statistic, which considers contemporaneous covariances, and their own GLS measure, which allows for both contemporaneous covariance and shifts in residual variance.

22 See Dyckman, Philbrick and Stephan (1984, p. 2) for elaboration.

23 See Dyckman, Philbrick and Stephan (1984).

24 BW (1980, p. 238) reported that even the mean-adjusted returns model did well with monthly data; the control portfolio technique did not.

25 See, e.g., BW (1980, p. 236) and Dyckman, Philbrick and Stephan (1984, p. 25).

26 To the extent that events are calendar, industry, risk and market-condition clustered, the market-adjusted returns approach could have trouble isolating excess returns.

27 The SUR analysis process is explained in Theil (1971).

28 Sefcik and Thompson (1986) discuss the nuances of portfolio weighting schemes and the details of constructing portfolios where cross-correlations are also taken into account.

29 Malatesta's analysis is an assessment of GLS but not of GLS with event clustering. Malatesta's event days are not systematically clustered.

30 See Marshall (1975).

31 This general approach is suggested by BW (1980, p. 250), extended by Jaffe (1974), and tested by Collins and Dent (1984).

32 Mikkelson and Partch (1988) provide a mathematically equivalent but slightly more convenient calculation for s[.sub.ft].

33 If contemporaneous covariance is a problem, s[.sub.t] can be estimated cross-sectionally and the proposed refinements in s[.sub.ft] still hold.

34 Karafiath and Spencer (1989) report that the Dodd and Warner (or Patell) statistic is biased toward rejection of the null of no significant excess return if the event window is long relative to the estimation period. The cause of the bias is the cross-correlation among the residuals, which this test statistic assumes to be zero. The Patell chi square statistic based on the squared t (or Z) values does not seem to suffer the same problem.

35 See Thompson (1985) and Karafiath (1988) for elaboration.

36 See Binder (1985), for example.

References

1. Asquith, Paul, Robert F. Brunner and David W. Mullins, Jr., 1983, The Gains to Bidding Firms from Mergers, Journal of Financial Economics, 11: 121-39.

2. Asquith, Paul and David W. Mullins, Jr., 1986, Equity Issues and Offering Dilution, Journal of Financial Economics, 15: 61-90.

3. Atchison, Michael D., 1986, Non-Representative Trading Frequencies and the Detection of Abnormal Performance, Journal of Financial Research, 9: 343-48.

4. Ball, Ray and Phillip Brown, 1968, An Empirical Evaluation of Accounting Income Numbers, Journal of Accounting Research, 6: 159-78.

5. Basu, Sanjoy, 1981, Market Reactions to Accounting Policy Deliberations: The Inflation Accounting Case Revisited, The Accounting Review, 56: 942-54.

6. Beaver, William H., 1968, The Information Content of Annual Earnings Announcements, Journal of Accounting Research, Supplement, 6: 67-92.

7. Beaver, William H., 1982, Discussion of Market-Based Empirical Research in Accounting: A Review, Journal of Accounting Research, Supplement, 20: 323-31.

8. Berry, Michael A., George W. Gallinger and Glenn V. Henderson, Jr., 1990, Using Daily Stock Returns in Event Studies and the Choice of Parametric versus Non-Parametric Test Statistics, Quarterly Journal of Business and Economics, 29: 70-85.

9. Binder, John J., 1985a, On the Use of the Multivariate Regression Model in Event Studies, Journal of Accounting Research, 23: 370-83.

10. __ 1985b, Measuring the Effects of Regulation with Stock Price Data, Rand Journal of Economics, 16: 167-83.

11. Black, Fisher, 1972, Capital Market Equilibrium with Restricted Borrowing, Journal of Business, 45: 444-55.

12. Booth, James R., Dennis T. Officer and Glenn V. Henderson, Jr., 1985, Commercial Bank Stocks, Interest Rates and Systematic Risk, Journal of Economics and Business, 37: 303-10.

13. Bowman, Robert G., 1983, Understanding and Conducting Event Studies, Journal of Business Finance and Accounting, 10: 561-84.

14. Brenner, Menachem, 1977, The Effect of Model Misspecification on Tests of the Efficient Market Hypotheses, Journal of Finance, 32: 57-66.

15. __ 1979, The Sensitivity of the Efficient Market Hypothesis to Alternative Specifications of the Market Model, Journal of Finance, 34: 915-29.

16. Brown, Stephen J. and Jerold B. Warner, 1980, Measuring Security Price Performance, Journal of Financial Economics, 8: 205-58.

17. __ 1985, Using Daily Stock Returns: The Case of Event Studies, Journal of Financial Economics, 14: 3-32.

18. Burgstahler, David and Eric W. Noreen, 1986, Detecting Contemporaneous Market Reactions to a Sequence of Related Events, Journal of Accounting Research, 24: 170-86.

19. Chalk, Andrew J. and John W. Peavy, III, 1987, Initial Public Offerings: Daily Returns, Offering Types and the Price Effect, Financial Analysts Journal, 43: 65-9.

20. Chu, Chen-Chin, Edward L. Bubnys and C.F. Lee, 1987, Estimates of the Expected Market Risk Premium: Empirical Analysis, paper presented at the Joint National Meeting of ORSA/TIMS.

21. Collins, Daniel W. and Warren T. Dent, 1984, A Comparison of Alternative Testing Methodologies Used in Capital Market Research, Journal of Accounting Research, 22: 48-84.

22. Collins, Daniel W., Michael S. Rozeff and William K. Salatka, 1982, The SEC's Rejection of SFAS No. 19: Tests of Market Price Reversal, The Accounting Review, 57: 1-17.

23. Cross, Mark L., Wallace N. Davidson, III and John H. Thornton, 1988, Taxes, Stock Returns and Captive Insurance Subsidiaries, Journal of Risk and Insurance, 55: 331-38.

24. Dann, Larry Y. and Wayne H. Mikkelson, 1984, Convertible Debt Issuance, Capital Structure Change and Financing-Related Information, Journal of Financial Economics, 13: 157-86.

25. Davidson, Wallace N., III, P.R. Chandy and Mark Cross, 1987, Large Losses, Risk Management and Stock Returns in the Airline Industry, Journal of Risk and Insurance, 54: 163-72.

26. Dimson, Elroy, 1979, Risk Measurement When Shares are Subject to Infrequent Trading, Journal of Financial Economics, 7: 197-226.

27. Dodd, Peter and Jerold B. Warner, 1983, On Corporate Governance, Journal of Financial Economics, 11: 401-38.

28. Dowen, Robert J. and Steven C. Isberg, 1988, Re-examination of the Intervalling Effect on the CAPM Using a Residual Return Approach, Quarterly Journal of Business and Economics, 27: 114-29.

29. Dyckman, Thomas, Donna Philbrick and Jens Stephan, 1984, A Comparison of Event Study Methodologies Using Daily Stock Returns: A Simulation Approach, Journal of Accounting Research, 22: 1-33.

30. Fama, Eugene F., 1976, Foundations of Finance (New York, Basic Books).

31. Fama, Eugene F., Lawrence Fisher, Michael Jensen and Richard Roll, 1969, The Adjustment of Stock Prices to New Information, International Economic Review, 10: 1-21.

32. Fama, Eugene F. and James D. MacBeth, 1973, Risk, Return and Equilibrium: Empirical Tests, Journal of Political Economy, 81: 607-636.

33. Flannery, Mark J. and Christopher M. James, 1984, The Effect of Interest Rate Changes on the Common Stocks of Financial Institutions, Journal of Finance, 39: 1141-53.

34. Fogler, H. Russell and Sundaram Ganapathy, 1982, Financial Econometrics (Englewood Cliffs, NJ, Prentice-Hall, Inc.).

35. Foster, George, 1973, Stock Market Reaction to Estimates of Earnings Per Share by Company Officials, Journal of Accounting Research, 11: 25-37.

36. __ 1975, Security Price Revaluation Implications of Sub-Earnings Disclosure, Journal of Accounting Research, 13: 283-92.

37. Fowler, David J. and C. Harvey Rorke, 1983, Risk Measurement When Shares are Subject to Infrequent Trading: Comment, Journal of Financial Economics, 12: 279-83.

38. Gibbons, Michael R., 1980, Econometric Models for Testing a Class of Financial Models-An Application of the Nonlinear Multivariate Regression Model, Ph.D. dissertation, University of Chicago.

39. Glascock, John L., Wallace N. Davidson, III and Glenn V. Henderson, Jr., 1987, Announcement Effects of Moody's Bond Rating Changes on Equity Returns, Quarterly Journal of Business and Economics, 26: 67-78.

40. Gonedes, Nicholas J., Nicholas Dopuch and Stephen J. Penman, 1976, Disclosure Rules, Information Production, and Capital Market Equilibrium: The Case of Forecast Disclosure Rules, Journal of Accounting Research, 14: 89-137.

41. Griffin, Paul A. and Antonio Z. Sanvicente, 1982, Common Stock Returns and Rating Changes: A Methodological Comparison, Journal of Finance, 37: 103-19.

42. Gujarati, D., 1970, Use of Dummy Variables in Testing for Equality Between Sets of Coefficients in Linear Regression, American Statistician, 24: 18-21.

43. Jaffe, Jeffrey F., 1974a, The Effect of Regulation Changes on Insider Trading, The Bell Journal of Economics and Management Science, 93-121.

44. __ 1974b, Special Information and Insider Trading, Journal of Business, 47: 410-28.

45. Karafiath, Imre, 1988, Using Dummy Variables in the Event Methodology, Financial Review, 23: 351-57.

46. Karafiath, Imre and David E. Spencer, 1989, Statistical Inference in Multi-Period Event Studies: An Assessment, working paper, authors currently at University of North Texas and Massachusetts Institute of Technology (visiting scholar), respectively.

47. Klein, April and James Rosenfeld, 1987, The Influence of Market Conditions on Event Study Residuals, Journal of Financial and Quantitative Analysis, 22: 345-51.

48. Lacey, Nelson J., 1988, Recent Evidence on the Liability Crisis, Journal of Risk and Insurance, 55: 499-508.

49. Langetieg, Terrence C., 1978, An Application of a Three-Factor Performance Index to Measure Stockholders' Gains from Merger, Journal of Financial Economics, 6: 365-83.

50. Lloyd, William P. and Richard A. Shick, 1977, A Test of Stone's Two-Index Model of Returns, Journal of Financial and Quantitative Analysis, 12: 363-76.

51. Malatesta, Paul H., 1986, Measuring Abnormal Performance: The Event Parameter Approach Using Joint Generalized Least Squares, Journal of Financial and Quantitative Analysis, 21: 27-38.

52. __and Rex Thompson, 1985, Partially Anticipated Events, Journal of Financial Economics, 14: 237-50.

53. Mandelker, Gershon, 1974, Risk and Return: The Case of Merging Firms, Journal of Financial Economics, 4: 303-35.

54. Marshall, Ronald M., 1975, Interpreting the API, The Accounting Review, 50: 99-111.

55. Mikkelson, Wayne H. and M. Megan Partch, 1986, Valuation Effects of Security Offerings and the Issuance Process, Journal of Financial Economics, 15: 31-59.

56. Ohlson, James A. and Barr Rosenberg, 1982, Systematic Risk of the CRSP Equal-Weighted Common Stock Index: A History Estimated by Stochastic-Parameter Regression, Journal of Business, 55: 121-45.

57. Patell, James M., 1976, Corporate Forecasts of Earnings Per Share and Stock Price Behavior: Empirical Tests, Journal of Accounting Research, 14: 246-76.

58. __ 1979, The API and the Design of Experiments, Journal of Accounting Research, 17: 528-49.

59. Patell, James M. and Mark A. Wolfson, 1979, Anticipated Information Releases Reflected in Call Option Prices, Journal of Accounting and Economics, 1: 117-40.

60. Penman, Stephen H., 1982, Insider Trading and the Dissemination of Firms' Forecast Information, Journal of Business, 55: 479-503.

61. Peterson, Pamela P., 1989, Event Studies: A Review of Issues and Methodology, Quarterly Journal of Business and Economics, 28: 36-66.

62. Pinches, George E. and J. Clay Singleton, 1978, The Adjustment of Stock Prices to Bond Rating Changes, Journal of Finance, 33: 29-44.

63. Reinganum, Mark R., 1982, A Direct Test of Roll's Conjecture on the Firm Size Effect, Journal of Finance, 37: 27-35.

64. Roll, Richard, 1981, A Possible Explanation of the Small Firm Effect, Journal of Finance, 36: 879-88.

65. Scholes, Myron and Joseph Williams, 1977, Estimating Betas from Nonsynchronous Data, Journal of Financial Economics, 5: 309-28.

66. Sefcik, Stephan E. and Rex Thompson, 1986, An Approach to Statistical Inference in Cross-Sectional Models with Security Abnormal Returns as Dependent Variable, Journal of Accounting Research, 24: 316-34.

67. Sprecher, C. Ronald and Mars A. Pertl, 1983, Large Losses, Risk Management and Stock Prices, Journal of Risk and Insurance, 50: 107-117.

68. Stickel, Scott E., 1984, The Effect of Preferred Stock Rating Changes on Preferred and Common Stock Prices, working paper, University of Chicago.

69. Stone, Bernell K., 1974, Systematic Interest Rate Risk in a Two-Index Model of Returns, Journal of Financial and Quantitative Analysis, 9: 709-21.

70. Sweeney, Richard James and Arthur D. Warga, 1986, The Pricing of Interest-rate Risk: Evidence from the Stock Market, Journal of Finance, 41: 393-410.

71. Theil, H., 1971, Principles of Econometrics (New York, John Wiley and Sons).

72. Theobald, Michael, 1983, The Analytic Relationship Between Intervalling and Nontrading in Continuous Time, Journal of Financial and Quantitative Analysis, 18: 199-209.

73. Thompson, Joel E., 1988, More Methods that Make Little Difference in Event Studies, Journal of Business Finance and Accounting, 15: 77-86.

74. Thompson, Rex, 1985, Conditioning the Return-Generating Process in Firm-Specific Events: A Discussion of Event Study Methods, Journal of Financial and Quantitative Analysis, 20: 151-68.

75. VanDerhei, Jack L., 1987, The Effect of Voluntary Termination of Overfunded Pension Plans on Shareholder Wealth, Journal of Risk and Insurance, 54: 131-56.

76. Watts, Ross L., 1973, The Information Content of Dividends, Journal of Business, 46: 191-211.

77. __ 1978, Systematic 'Abnormal' Returns After Quarterly Earnings Announcements, Journal of Financial Economics, 6: 127-50.

78. Winsen, Joseph K., 1977, A Reformation of the API Approach to Evaluating Accounting Income Numbers, Journal of Financial and Quantitative Analysis, 12: 499-504.

79. Zellner, Arnold, 1962, An Efficient Method of Estimating Seemingly Unrelated Regressions and Tests for Aggregation Bias, Journal of the American Statistical Association, 57: 348-68.

Glenn V. Henderson, Jr. is Briggs Swift Cunningham Professor of Finance at the University of Cincinnati. Appreciation is expressed to David Johnson and Rajiv Kalra for their assistance with the library research necessary for this article. The author is also grateful for the suggestions on earlier drafts received from David Johnson and Imre Karafiath. The Editor, S. Travis Pritchett, was especially helpful in improving the exposition in the article. An earlier version of this article was presented as a pedagogical seminar at the 1989 American Risk and Insurance Association meeting in Denver, Colorado in August 1989.

All these insurance studies follow a recent trend in event studies. They involve what Bowman (1983) calls metric explanation. In a metric explanation study, the event study is only the first step. Early studies explained the metrics (extra returns) by splitting the sample into different subsamples and examining whether the unusual element of returns differed among the subsamples. [1] More recent studies use excess returns as dependent variables in cross-sectional regressions to explain the source of the extra returns. [2]

These, then, are the basic types of event studies: market efficiency, information value, and metric explanation. These classifications are not mutually exclusive; it is quite common for event studies to combine a little of each. There are also methodology studies of the event study design, research that considers how best to run event studies. Event study methodology papers are unusual in business research, where econometricians and statisticians typically use statistical theory to define how a test should be run. In event studies, the issues have been examined empirically rather than theoretically, to find out what works given the nature of financial data. Such investigations normally involve simulations. The researcher selects, or creates, [3] a hypothetical sample of securities, injects abnormal returns on arbitrarily defined event dates, and tests competing methodologies to ascertain which is better at detecting the event.

This article draws heavily upon the event study methodology literature to identify the problems that a researcher faces and to suggest the solutions that current knowledge provides for these problems. The presentation is as non-technical as is reasonable, with a minimum of symbolic notation. On each topic, interested readers are referred to applicable studies for greater detail.

First comes a general description of the event study design together with an outline of the research decisions, or problems. The balance of the article considers solutions to each of the problems, providing evidence and references for the suggested choices.

The Event Study Design

The earliest applications of the event study were by Fama, Fisher, Jensen and Roll [FFJR] (1969) and Ball and Brown (1968). FFJR can be characterized as an efficient market study, while Ball and Brown is an information usefulness study. FFJR investigated how quickly and correctly the market reacted to announcements of stock splits. Ball and Brown considered the value of companies' annual earnings announcements. After designating the information event of interest, both studies follow the event study process that has since become classic. The steps are as follows:

1. Define the date upon which the market would have received the news.

2. Characterize the returns of the individual companies in the absence of this news.

3. Measure the difference between observed returns and "no-news" returns for each firm: the abnormal returns.

4. Aggregate the abnormal returns across firms and across time.

5. Statistically test the aggregated returns to determine whether the abnormal returns are significant and, if so, for how long.
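The five steps above can be sketched in code. The following is a minimal illustration on synthetic data, using the simplest "no-news" benchmark (the firm's estimation-period mean); all names, parameter values, and the injected event size are illustrative assumptions, not taken from any particular study.

```python
import numpy as np

# Minimal end-to-end sketch of the five event study steps on synthetic
# data. All numbers here are illustrative assumptions.

rng = np.random.default_rng(0)
n_firms, n_est, n_win = 10, 100, 11          # firms, estimation days, window days

est = rng.normal(0.0005, 0.02, (n_firms, n_est))   # estimation-period returns
win = rng.normal(0.0005, 0.02, (n_firms, n_win))   # event-window returns
win[:, n_win // 2] += 0.03                         # inject an "event" at t = 0

# Step 2: characterize normal returns for each firm.
normal = est.mean(axis=1, keepdims=True)

# Step 3: abnormal returns = observed minus "no-news" returns.
abnormal = win - normal

# Step 4: aggregate across firms (AR_t) and across time (CAR).
ar = abnormal.mean(axis=0)
car = ar.cumsum()

# Step 5: a simple time-series test, comparing the event-day AR with the
# variability of average residuals during the estimation period.
est_ar = (est - normal).mean(axis=0)
t_stat = ar[n_win // 2] / est_ar.std(ddof=1)
```

With the 3 percent abnormal return injected at t = 0, the event-day test statistic is large, while the other window days show only noise.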

Event Study Problems

The first, and potentially most important, choice an event researcher makes is what to study. Because event studies have become commonplace, the first hurdle an author faces with reviewers is convincing them of the merit of the project.

The event should be something of wide interest in the field. A good story is needed to explain anticipated market reaction to a particular bit of news. If the explanation of market reaction is obvious, time is of the essence. The researcher will be in a race to recognize the topic and get the research done and submitted. Even then there is the risk that reviewers will be unimpressed because of the obvious nature of the results.

Define Event Date

After defining the event, the researcher must determine when it took place. Timing of an event may seem obvious. It is not. The issue is not when an event occurred, but when the market, that is, its most interested and well informed segment, could have reasonably anticipated the news.

Characterize Normal Returns

Before discussing calculation of returns, some terms need defining. In event studies, it is important to distinguish between two periods. During the estimation period, estimates are derived. These estimates are used to define expected or normal returns for each firm during the event window. The event window is the event day plus and/or minus some number of days, weeks or months when the sample firms' returns are observed to see if anything unusual happened.

In event studies all time is kept relative to the event day. That day may or may not be the same calendar day for all firms. A study of stock splits, for example, would involve many different calendar days. By contrast, the study of a new regulation, or a policy change in accounting, might involve the same calendar event day for all firms. [4]

There are several approaches for characterizing a firm's normal returns. Some of the more popular include: mean returns, market returns, control portfolio returns, and risk-adjusted returns.

Mean Returns: In the mean return approach, a company is expected to generate the same return that it averaged during the estimation period.

Market Returns: In this approach, a company, in the absence of news, is expected to generate the same returns as the rest of the market during each day, week, or month of the event window.

Control Portfolio Returns: With control portfolios, the researcher selects a portfolio of companies that resemble the sample firms except for the absence of news about the firms in the control portfolio. The control firms might be in the same industry as the sample firms or they might be of the same risk (same beta). If controls are determined according to statistical criteria, such as beta, this similarity is assessed during some estimation period other than the event window. Abnormal return is the difference between the observed return of the sample firm and the return of the control portfolio for each day, week, or month during the event period.

Conditional, or Risk-adjusted Returns: This approach, which seems to be the current favorite, uses a regression model to predict expected returns for the firm. Abnormal returns, prediction errors, or residuals are defined to be the difference between the returns observed and those predicted by the regression model.

Calculate Excess Returns

Excess returns are the mathematical difference between observed returns and the returns predicted for that day, week, or month. When the company mean return is used as the prediction, these returns are referred to as mean-adjusted returns. When the return of the market is used as the "no-news" expectation, the difference is called a market-adjusted return. [5] When a regression model is used to predict a market-conditional return, the abnormal return may be given a variety of names. Some commonly encountered labels are abnormal returns, excess returns, prediction errors (PE), or residuals.

Aggregate Excess Returns

Before testing, prediction errors (PE) must be aggregated both across firms and across time. Aggregation across firms generally involves a simple averaging of PEs for all firms in the sample on a given day, where days are counted in event time. Call this average AR_t, the average prediction error, or average residual, across all firms on day t, where t is measured relative to the event day: t = 0 is the event day, t = -1 is the day before, and t = +1 is the day after.

Aggregation over time is most commonly a simple accumulation over so many event days. The cumulative abnormal return, CAR(t1, t2), is the sum of all of the AR_t between t1 and t2.
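The two aggregations amount to a column mean followed by a windowed sum over a firms-by-days matrix of prediction errors. The function names and event-time layout below are illustrative, not from the article.

```python
import numpy as np

# AR_t and CAR(t1, t2) from a firms-by-days matrix of prediction errors
# whose columns are indexed in event time.

def average_residual(pe):
    """AR_t: the mean prediction error across firms on each event day."""
    return pe.mean(axis=0)

def cumulative_abnormal_return(pe, t1, t2, event_col):
    """CAR(t1, t2): sum of AR_t for t1 <= t <= t2, t measured from day 0."""
    ar = average_residual(pe)
    return ar[event_col + t1 : event_col + t2 + 1].sum()

# Two firms observed on days t = -1, 0, +1 (event day in column 1).
pe = np.array([[0.01, 0.03, -0.01],
               [0.00, 0.05,  0.01]])
```

Here AR_0 is 0.04 and CAR(-1, +1) sums the three daily averages to 0.045.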

Run Statistical Tests

The last step in the event study is to test aggregated returns statistically. Early event studies often used graphics as the primary method of interpretation. CAR plots were presented to show the reader how the market reacted to an event. Such pictures are still routine in event studies, but now they are expected to be supported with more rigorous statistical tests. This is an area where methodology studies provide helpful guidance.

Approaches to Event Study Problems

The problems in event studies cannot be solved as such. They can only be dealt with. The purpose of this discussion is to indicate what others have found to be important and what appears to work.

Event Dates

Brown and Warner [BW] (1980, 1985) conducted two landmark simulation examinations of event study methodology. Their final conclusion was that "even if a researcher doing an event study has a strong comparative advantage at improving existing methods, a good use of his time is still in reading old issues of the Wall Street Journal [WSJ] to more accurately determine event dates" (1980, p. 249).

BW emphasize the importance of carefully defining event dates because of the results of their power tests on uncertain event dates. The more days (or months, in the 1980 study) one must include in the event window, because of inability to pinpoint an event date, the lower the power of the event methodology (1980, pp. 224-27). Being able to use daily data and to specify the event day correctly serves to increase the statistical power of the event study technique (1985, pp. 13-14).

Misidentification of an event date can obscure an issue. Early merger studies used the date of the merger and found no significant evidence of shareholder return effects (see, e.g., Mandelker, 1974). When Asquith, Bruner and Mullins (1983) used the date on which the intent to merge was announced, however, they found significant ARs and CARs.

Using monthly returns, Pinches and Singleton (1978) and Griffin and Sanvicente (1982) found that bond rating changes convey little new information to the markets in that there is no noticeable reaction in the announcement month. When Glascock, Davidson and Henderson (1987) reexamined the issue using daily data, they found stronger (significant) announcement effects and some lag in adjustment to the new information.

The Glascock, Davidson and Henderson study provides another warning for those defining event dates. The WSJ does not publish all the news, and sometimes the news is out in other forms before it is published. Glascock, Davidson and Henderson, for example, found an average lag of three days between Moody's wire service rating change announcement and the time Moody's published the information. Only 34 percent of these bond reratings were published by the WSJ. Also, it is essential to find the earliest date of public disclosure when one is defining event dates. A WSJ index search is a start; it is not a finish.

Even though event date uncertainty will frequently be a problem, the event study design is still effective. Dyckman, Philbrick and Stephan (1984) found that testing accumulated excess returns over a slightly longer period allows a researcher to detect events without precisely pinpointing the timing of the event. [6]

Normal Returns

There are a number of issues, or problems, to deal with in calculating normal returns. One must decide how to measure returns and which approach to use to do so. If a regression approach is used, a particular model must be selected, the period over which the model is estimated must be selected, and there are potential econometric problems to deal with.

Return Calculation: Most event studies barely mention how they calculate returns. Beaver (1982) discusses the problem. Fama (1976, pp. 17-20) suggests that continuously compounded returns conform better to the normality assumptions underlying regression. Although they do not elaborate, BW (1985) indicate they got similar results with simple and continuously compounded returns. Thompson explicitly reports that "return form also does not seem to be an important consideration in event studies" (1988, p. 81).

Although it does not seem to make much difference, a large proportion of the event studies use continuously compounded returns. All that means is that returns are used in log form as follows:

R_jt = ln(1 + Return) (1)

where R_jt = the continuously compounded return on security j in period t.

Returns normally include both price changes and dividends so this equation can also be written:

R_jt = ln[(Price_{t+1} + Dividend)/Price_t] (1a)

There are several reasons for using log transformed returns. Besides improving the normality of the return distribution, the transformation removes the lower bound of minus 100 percent on returns and makes it easier to convert daily returns to weekly or monthly returns: the continuously compounded return for a trading week is simply the sum of the five days' R_jt [assuming no holidays]. [7]
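The calculation in equation (1a) and the convenience of time aggregation can be sketched as follows; the prices are illustrative.

```python
import numpy as np

# Continuously compounded returns per equation (1a), and aggregation of
# five daily returns into a weekly return. Prices are illustrative.

def cc_return(price_next, price_now, dividend=0.0):
    """R_jt = ln((Price_{t+1} + Dividend) / Price_t)."""
    return np.log((price_next + dividend) / price_now)

prices = [100, 101, 102, 101, 103, 104]   # Friday close through Friday close
daily = np.array([cc_return(prices[t + 1], prices[t]) for t in range(5)])

# With log returns, the week's return is just the sum of the dailies,
# which equals ln(P_Friday / P_prior_Friday) when no dividends are paid.
weekly = daily.sum()
```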

Excess Return Calculation

The basics of the mean-adjusted, market-adjusted, and control portfolio approaches have been discussed. Here we discuss only where these approaches might be useful and whether they work.

Mean-Adjusted Returns: Mean-adjusted returns are event period returns minus a constant, that constant being the average return for that firm during its estimation period. BW's simulations indicate that the technique works relatively well, producing results that are comparable to the more complicated regression models for both monthly (1980, p. 224) and daily returns (1985, p. 13). Dyckman, Philbrick and Stephan (1984, p. 18) ran multiple comparison tests and found that the mean-adjusted returns model did not work as well as a regression model, the market model.

Life would be easier if the mean-adjusted model could be used. Unfortunately, things are not that simple. BW find serious problems with the technique when a study involves calendar clustering, that is, when the event dates for all the firms are the same or fall close together. Klein and Rosenfeld (1987) recently found another problem. If many of the events occur during bull (bear) markets, this approach produces upward (downward) biased residuals. A similar problem occurs if an event frequently takes place after a company's stock has experienced a runup. [8] Then estimates of the firm's average return derived from a preceding estimation period would be upward biased and residuals consequently would be downward biased.

Market-Adjusted Returns: Market-adjusted returns are especially convenient. They involve no estimation process and no estimation period. To calculate market-adjusted returns, subtract the return on the market for the period from the return on the stock for the same period. This approach is helpful when there are no data prior to the event, as, for example, in the case of initial public offerings. [9] Further, BW (1980, p. 224; 1985, p. 13) suggest the power of the technique is comparable to that of the regression-based, market model approach. On the other hand, the Dyckman, Philbrick and Stephan (1984, p. 18) paired comparison test indicates that this model is less powerful than the market model. This approach also does not handle calendar clustering well.
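The convenience of the market-adjusted approach is apparent in code: there is nothing to estimate. The return series below are illustrative.

```python
import numpy as np

# Market-adjusted abnormal returns: no estimation period or regression is
# needed; the market's return is simply subtracted each period.

r_firm   = np.array([0.012, -0.004, 0.031])   # firm returns, t = -1, 0, +1
r_market = np.array([0.010,  0.002, 0.005])   # market index returns

# Implicitly assumes alpha = 0 and beta = 1 for every sample firm.
abnormal = r_firm - r_market
```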

Control Portfolio Returns: BW discuss a particular type of control portfolio where sample firms are assembled into unit-beta portfolios, i.e., portfolios with betas of one. Excess returns for the portfolio are then the return in excess of the market. BW (1980, pp. 223-49) report this technique to be less effective than the other three approaches because of the manner in which portfolio weights are defined (1980, pp. 238-39).

The control portfolio approach is not limited to matching betas to the market. It is easy to envision using industry indexes as proxies for the return on control portfolios. In that case excess returns could be measured as the difference between event company returns and an industry average excluding those firms with events. [10]

Given human capital investments in regression models, it is unlikely that control portfolios will gain popularity, but there is reason to consider this approach. Comparing to similar non-event companies controls for more than a statistical measure of risk, excess returns are easily interpreted, and definition of an estimation interval and the event day is less critical. The approach is convenient and does have merit, even though it is less frequently employed than the conditional, or risk-adjusted, returns model.

Conditional, or Risk-Adjusted, Returns: This approach dominates the event study literature. Some of the major issues to be resolved include: which regression model to use, how to define the predictor variables in the model selected, what to use as the estimation period, and which econometric problems to worry about.

Regression Models

Symbolic notation is the most efficient method of presenting the discussion. The more popular regression models include the following:

[Mathematical Expressions Omitted]

The single-index market model (SIMM):

R_jt = α_j + β_j R_mt + ε_jt (2)

where R_mt is the return on the market index in period t, α_j and β_j are the firm's regression parameters, and ε_jt is a mean-zero residual.

All five regression models have appeared in the literature. There is no need to explain them all completely. Explanation of how each is applied, however, is appropriate before referring to Brenner (1979) to justify not belaboring them.

The SIMM is the simplest to use. Regress R_jt on R_mt during the estimation period to get α_j and β_j. Then, during the event window, predict R_jt using the market returns during the window. Subtract predicted (or expected) E(R_jt) from observed R_jt to define excess returns. The excess returns model works the same way except that risk-free rates are subtracted from R_jt and R_mt before running the regressions.
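The SIMM mechanics just described can be sketched on synthetic data; the true parameters and window returns below are illustrative assumptions.

```python
import numpy as np

# SIMM: estimate alpha and beta by OLS over an estimation period, then
# form prediction errors over the event window. Synthetic data.

rng = np.random.default_rng(1)
r_m_est = rng.normal(0.0005, 0.01, 250)                       # market returns
r_j_est = 0.001 + 1.2 * r_m_est + rng.normal(0, 0.01, 250)    # firm, true beta 1.2

# OLS of R_jt on R_mt over the estimation period.
X = np.column_stack([np.ones_like(r_m_est), r_m_est])
alpha, beta = np.linalg.lstsq(X, r_j_est, rcond=None)[0]

# Event window: expected returns conditional on the market, then residuals.
r_m_win = np.array([0.004, -0.002, 0.001])
r_j_win = np.array([0.006,  0.030, 0.002])     # day 0 looks abnormal
pe = r_j_win - (alpha + beta * r_m_win)
```

The day 0 prediction error stands out because the observed return far exceeds the market-conditional expectation.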

The zero beta application is slightly different. Fama and MacBeth (1973) ran cross-sectional regressions to get estimates of λ_0t and λ_1t for the period January 1935 to June 1968, which they have supplied to a number of researchers. [12] With the Fama-MacBeth approach, a researcher can allow betas to vary from one t to the next. Therefore, beta could be estimated with a time series process.

With the own variance model, beta is not the only measure of risk; variance is also considered to explain part of R_jt. Application of this approach is analogous to the zero beta model.

Fortunately for the purposes of this review, Brenner (1979) has examined these four approaches. He found that the simplest, the SIMM, did essentially as well as the more complicated approaches.

The last model presented, the multiple index model (6), is not a specific model but a generic description of regression models where researchers add indexes for influences other than the market. For example, Langetieg (1978) added an industry factor in his merger study, finding it important for performance measurement. Thompson (1988) evaluated the use of industry indexes in a BW type simulation study and found them not to increase the power of the event study.

Because of the interest rate sensitivity of bank stock returns, interest rate indexes are beginning to appear in event studies involving banks. [13] Given the nature of insurance company income, this may work in that industry as well. Whether this is appropriate is an empirical issue.

Measurement of Independent Variables: The SIMM is the most frequently used regression model. It has only one independent variable, R_mt. The issue is whether this index should be calculated on a value-weighted or equal-weighted basis. Theory says to use a value-weighted index because it most appropriately reflects total market performance. The arguments for a value-weighted index can be found in Roll (1981) and Ohlson and Rosenberg (1982). Unfortunately, an equal-weighted index is more likely to detect abnormal returns. Evidence is provided by BW (1980), and an explanation is provided by Peterson (1989).

Estimation Period: There are essentially four choices for the estimation period: before the event window, during the window, after the window, and around the window. The majority of events studies use an estimation period before the event. Appropriate disclaimers are included to assure the reader that the window was wide enough to avoid contaminating the regression. [14]

Market-adjusted returns essentially involve estimation during the window by assuming alpha is zero and beta is one. The control portfolio approach also involves estimation during the window, although construction of the control portfolio normally is based on data from before or after the event window.

A strong case can be made for a post-event estimation period if the event is expected fundamentally to alter the firms' sensitivity to market returns. If the event is important enough to change alpha and beta, then values from before the event are not appropriate. Evidence of alpha or beta changes may be used as evidence of an event.

The problem of alpha and beta shifts can be handled by using an estimation period around the window and testing for parameter shifts. The Gujarati (1970) technique is especially convenient for such tests, as follows:

R_jt = α_j + Δα_j D + β_j R_mt + Δβ_j D R_mt + ε_jt (7)

where everything is exactly the same as equation (2) except that the dummy variable, D, equals one after the event and zero before. If Δα_j and/or Δβ_j is statistically different from zero, there has been a parameter shift. Observations on R_jt and R_mt during the window are excluded. If there is evidence or reason to question parameter stability, then a post-event estimation period may be appropriate. [15] Arguments for such a process can be found in Stickel (1984).
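The dummy-variable test can be sketched as a single OLS regression; the synthetic data below build in a deliberate beta shift of +0.5, and all names are illustrative.

```python
import numpy as np

# Sketch of the Gujarati-style dummy-variable test: one regression over
# pre- and post-event data (event window excluded), with D = 1 after the
# event. Synthetic data with a true beta shift from 1.0 to 1.5.

rng = np.random.default_rng(2)
n = 100
r_m = rng.normal(0.0005, 0.01, 2 * n)
d = np.concatenate([np.zeros(n), np.ones(n)])            # 0 before, 1 after
r_j = 0.001 + (1.0 + 0.5 * d) * r_m + rng.normal(0, 0.005, 2 * n)

# Regress R_jt on a constant, D, R_mt, and D * R_mt; the coefficients on
# D and D * R_mt estimate the alpha and beta shifts, respectively.
X = np.column_stack([np.ones(2 * n), d, r_m, d * r_m])
alpha, d_alpha, beta, d_beta = np.linalg.lstsq(X, r_j, rcond=None)[0]
```

In a real application one would also compute standard errors for the shift coefficients and test them against zero.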

Econometric Problems: Regression models are based on a number of statistical assumptions. Specifically, the models assume that the residuals: are normally distributed with a mean of zero, are not serially correlated, have a constant variance, and are not correlated with the explanatory variables. Further, in finance where several different regressions may be run for different securities, it is also assumed that there is no correlation between residuals for the different firms. [16]

In business studies, there is reason to be concerned about each of these assumptions. Security returns are not normally distributed, a problem that is worse in the case of the daily returns that are increasingly popular in event studies. There is evidence of slight serial correlation in security returns, which can be more pronounced in the residuals if observations are less than perfectly matched (nonsynchronous trading). There is evidence that variance shifts are sometimes associated with financial events. There is evidence that the residuals are correlated with values of the independent variable, R_mt. When a study involves calendar clustering, there is evidence of contemporaneous covariance between the residuals of different firms. Fortunately, the event study design appears to be robust to most of these problems, or techniques have been developed to handle them.

Nonnormality: The nonnormality problem is potentially more troublesome for studies using daily data. As reported by BW (1985, pp. 8-10) and replicated by Berry, Gallinger and Henderson (1990), daily returns are non-normal. Fortunately, the same is not true of the residuals. Either the distribution of the residuals is close enough to normal that such a null cannot be rejected (see Berry, Gallinger and Henderson, 1990), or the simulation results indicate that there is no gain in power by using distribution-free test statistics (BW, 1985, p. 25).

Serial Correlation and Nonsynchronous Trading: There is statistically significant autocorrelation in the residuals from SIMM regressions using daily data. It is possible that nonsynchronous trading and the induced bias in betas might be part of the problem (e.g., see BW, 1985, p. 19).

Nonsynchronous trading refers to the mismatching of the values for R_mt and R_jt. Normally, R_mt is an index created by averaging returns computed from each security's last trade price on day t. Some securities do not trade at or near the end of the trading day. Consequently, the individual R_jt used in the SIMM regressions in some cases involve observations that are not exactly end-of-day values. Further, R_mt is made up of R_jt, some of which are not end-of-day values. This induces a bias in the betas of individual securities: the betas of infrequently traded securities are biased downward, while shares trading with more than average frequency have upward biased betas.

Two widely recognized techniques have been suggested to correct for the bias. Scholes and Williams [SW] (1977) estimate lead, lagged, and contemporaneous betas separately and take a weighted average of these estimates. Dimson (1979) runs a one-pass regression of R_jt on R_mt and a number of leading and lagging values of R_mt, then adds up the coefficients. Fowler and Rorke (1983) show the two approaches to be equivalent after making a minor correction to the Dimson procedure.
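Dimson's aggregated-coefficients idea can be sketched as follows; the helper function and synthetic data are illustrative assumptions.

```python
import numpy as np

# Sketch of Dimson's aggregated-coefficients estimator: one regression of
# R_jt on leading, contemporaneous, and lagging market returns, with the
# slope coefficients summed.

def dimson_beta(r_j, r_m, lags=1):
    """Sum of slopes from regressing r_j on r_m at t - lags ... t + lags."""
    idx = np.arange(lags, len(r_m) - lags)
    cols = [np.ones(len(idx))]
    for k in range(-lags, lags + 1):
        cols.append(r_m[idx + k])
    coef = np.linalg.lstsq(np.column_stack(cols), r_j[idx], rcond=None)[0]
    return coef[1:].sum()          # aggregate all lead/lag slope estimates

# With fully synchronous synthetic data the estimate should land near the
# true beta, since the lead and lag slopes are then just noise around zero.
rng = np.random.default_rng(3)
r_m = rng.normal(0.0, 0.01, 500)
r_j = 1.1 * r_m + rng.normal(0.0, 0.01, 500)
```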

Unfortunately, the extra work does not seem to strengthen event study results. Reinganum (1982) and Theobald (1983) do not find the SW and Dimson betas significantly better than OLS estimates using daily data. The procedures do not eliminate the autocorrelation in event study residuals, [17] and they do not improve the power of event studies in simulation studies. [18] In their daily returns study, BW tried a simple autocorrelation adjustment procedure based on a revised test statistic, but it did not help much. [19]

In general, correction for autocorrelation of residuals appears unwarranted. Weak-form efficient market tests document the limited autocorrelation in security returns. Autocorrelation in the residuals is even smaller and appears to pose little problem for event studies.

Variance Shifts: Beaver (1968) and Patell and Wolfson (1979) provide evidence of variance shifts coincident with financial events. BW (1985, pp. 23-25) discuss the problems caused by variance shifts and suggest the use of a cross-sectional estimate of the variance following Penman (1982). The details are spelled out in their appendix (1985, p. 28).

As BW point out, there are costs to using such cross-sectional measures. Such a calculation implicitly assumes that the variance for every firm is the same on day t and the estimates ignore estimation period data. Consequently, BW suspect the procedure will be less powerful and their simulation results confirm their suspicions.

Collins and Dent (1984) examine potentially more powerful techniques that employ a generalized least squares (GLS) model and allow for variance shifts and cross-correlation among different firms' residuals. Their results are encouraging, although their models assume that the scale of the variance shift is estimable for all firms in the sample. Collins and Dent also evaluate techniques that handle correlation between residuals and the independent variable in event studies - the return on the market.

Correlation Between Residuals and R_mt: If the type of event under study has a greater probability of occurring in a bull market than in a bear market, a problem arises. If expected residuals are based on an estimation interval in which the market was not doing as well, the conditional expectation of R_jt is misspecified, and that misspecification is carried into the excess return calculation.

Patell (1976) was the first to correct for this. He used a forecast error rather than a standard deviation to test whether the excess return was statistically significant. Further, he used standardized excess returns in the aggregation process. The Patell article has been heavily cited in the accounting literature. Dodd and Warner (1983) introduced these metrics in the finance literature, and their article is also widely cited as a precedent for the use of these test statistics. [20]
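The standardization idea can be sketched with the standard OLS out-of-sample prediction variance: each prediction error is divided by its forecast-error standard deviation, which exceeds the in-sample residual standard deviation. The formula and numbers below are illustrative, not Patell's exact notation.

```python
import numpy as np

# Standardizing a prediction error by its forecast-error standard
# deviation (the usual OLS out-of-sample prediction variance).

def forecast_se(resid_var, r_m_est, r_m_t):
    """Forecast standard error of a SIMM prediction at market return r_m_t."""
    t = len(r_m_est)
    mbar = r_m_est.mean()
    ssx = ((r_m_est - mbar) ** 2).sum()
    return np.sqrt(resid_var * (1 + 1 / t + (r_m_t - mbar) ** 2 / ssx))

r_m_est = np.array([0.01, -0.01, 0.0, 0.02, -0.02])   # estimation-period market
spe = 0.025 / forecast_se(0.0001, r_m_est, 0.0)       # standardized prediction error
```

Aggregating standardized rather than raw prediction errors gives firms with noisier forecasts proportionally less weight.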

Collins and Dent (1984) test the Patell statistic (and other techniques). This simple modification performs well when there is no event clustering and there are relatively small shifts in residual variances. If these two problems are pronounced, more sophisticated estimation models are necessary to preserve statistical power. [21]

Event Clustering: Clustering is one of the more troubling problems in event studies. It occurs in a number of forms, and it is easy to envision insurance studies where all types of clustering would be encountered.

Calendar clustering refers to events occurring at or near the same time. Calendar clustering is also called event clustering. Both BW studies recognize the problems caused by calendar clustering, and researchers have since expended considerable effort trying to handle the problem. A lot of work has been done in accounting because so many accounting studies involve the calendar clustering problem. [22]

Industry clustering refers to events concentrated in the same industry. Both it and event clustering reduce power. [23]

Risk clustering refers to events all occurring to companies with similar betas. With risk clustering, the conditional return models perform reasonably well. [24] Supposedly, abnormal performance is easier to detect in lower-risk (beta) companies. [25] The ability to detect abnormal performance in low-risk industries presumes that events are diversified over market phases. During boom conditions, any abnormal performance in low market-sensitivity companies could be swamped by general market movements. [26]

Two different approaches have been used to handle the contemporaneous covariance, or event clustering, problem. Earlier approaches involved modification of the test statistics. Recently, emphasis has been on using regression models that incorporate estimates of the contemporaneous covariance in the estimation of the regression coefficients. These estimation procedures have been labeled differently, depending upon the author and the particular nuances the author is emphasizing. The approach has been called joint generalized least squares (GLS) by Collins and Dent (1984) and Collins, Rozeff and Salatka (1982), estimated GLS by Thompson (1985), and multivariate regression model (MVRM) by Binder (1985). The basic technique is an application of Zellner's (1962) seemingly unrelated regression, which was suggested for use in event studies by Gibbons (1980), although it had been used earlier in economics. [27]

Conceptually, the application is not too difficult. Rather than assume that residuals across equations are independent, it is assumed that they are correlated and that the correlation process is stable. A set of first-pass regressions is run and a variance-covariance structure is estimated across the system of equations. The variance-covariance matrix is then used in a transformation of the observation matrix to derive estimates of the regression parameters.

Researchers have gone a step further in the use of the technique. They have added dummy variables during the event period (or on individual event days) to estimate the event parameters. The resulting system of equations looks as follows:

R[.sub.jt] = [alpha][.sub.j] + [beta][.sub.j]R[.sub.mt] + [summation][gamma][.sub.jk]D[.sub.kt] + [epsilon][.sub.jt], j = 1, ..., N (8)

where the sum runs over the event-period days and D[.sub.kt] is a dummy variable equal to one on event day k and zero otherwise.

One advantage of joint GLS is that forecast error estimates take into account contemporaneous covariance. Further, as Thompson (1985) explains, GLS allows joint tests, i.e., testing whether any one of the prediction errors ([gamma] in this model) is significant for the individual companies, not just for the portfolio of companies as a whole.

Binder (1985) provides a further simplification of the model. If one wants all sample firms to be accorded equal weight in the inferences, equation (8) can be collapsed to a portfolio model as follows:

R[.sub.pt] = [alpha][.sub.p] + [beta][.sub.p]R[.sub.mt] + [summation][gamma][.sub.pk]D[.sub.kt] + [epsilon][.sub.pt] (9)

where all variables are exactly as in (8) except that they refer to the portfolio. The value [gamma][.sub.pt] is the same as the AR[.sub.t] in the simpler event studies. The difference is that the forecast error of [gamma][.sub.pt] considers the contemporaneous covariance between the [epsilon][.sub.jt].

It is possible that the researcher will want to give different weights to the various firms. One possibility is to weight firms with respect to the quality of the conditional return forecasts. More simply, one can run single-equation regressions of the SIMM and then weight the companies in the portfolio according to the R[.sup.2] of these regressions. [28]
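The R[.sup.2]-weighting idea takes only a few lines. The sketch below is mine, not the article's, and runs on simulated data in which noisier firms fit the market model less well and so receive smaller weights:

```python
import numpy as np

rng = np.random.default_rng(1)
T, J = 120, 4
r_m = rng.normal(0.0005, 0.01, T)
betas = rng.uniform(0.6, 1.4, J)
noise = rng.normal(0.0, rng.uniform(0.005, 0.03, J), (T, J))  # noisier firms fit worse
R = 0.0003 + r_m[:, None] * betas + noise

X = np.column_stack([np.ones(T), r_m])
b, *_ = np.linalg.lstsq(X, R, rcond=None)      # single-equation SIMM regressions
resid = R - X @ b
r2 = 1.0 - resid.var(axis=0) / R.var(axis=0)   # R^2, firm by firm

w = r2 / r2.sum()                              # weights proportional to fit quality
R_p = R @ w                                    # weighted portfolio return series
```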

This is a relatively new line of thinking in event study research. How effective it will be is not yet clear. Collins and Dent (1984) find merit in the technique, especially when encountering significant cross-correlations and shifts in residual variance. On the other hand, Malatesta (1986, p. 38) reports that "there is no evidence that joint GLS provides a practical payoff in the typical event study context." [29]

Measuring Excess Returns

This is one area where the solution is relatively straightforward. The excess return in period t (day t for simplicity) is the difference between R[.sub.jt] (observed) and E(R[.sub.jt]) (predicted from one of the regression models or, in the two simpler cases, from the firm's estimation period average or the concurrent market return).

Aggregating Excess Returns

Although there are probably others, there are three common methods of aggregation: cumulative abnormal return (CAR), abnormal performance index (API), and standardized cumulative prediction error (SCPE).

CAR and API both start by averaging prediction errors across firms on day t; again, AR[.sub.t] is the arithmetic average excess return, or prediction error, on day t. From there, the CAR is a cumulative sum and the API a geometric product, as follows:

CAR[.sub.t,T] = [summation from [tau] = t to T]AR[.sub.[tau]] (10)

API[.sub.t,T] = [product from [tau] = t to T](1 + AR[.sub.[tau]]) (11)

Ball and Brown (1968) used the API to assess the value of accounting income numbers. API[.sub.t,T] is the extra return one could make over the period from t to T by acting on the information at time t. The API is primarily an expository device. [30] It is not truly a test statistic, although Winsen (1977) suggests how to convert the API to a test statistic.
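The two aggregation schemes are easy to sketch numerically. The fragment below is illustrative only (it is not from the article, and the prediction errors are simulated):

```python
import numpy as np

rng = np.random.default_rng(2)
pe = rng.normal(0.0, 0.02, (25, 30))   # prediction errors: 25 firms x 30 event days

ar = pe.mean(axis=0)                   # AR_t: average prediction error on each day
car = np.cumsum(ar)                    # CAR: running arithmetic sum of the AR_t
api = np.cumprod(1.0 + ar)             # API: geometric (buy-and-hold) counterpart
```

The API starts near one; api - 1 is the extra return from acting on the information, and for small daily ARs it stays close to the CAR.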

Because of its testability, the CAR is more widely employed. It is the measure used by FFJR, and it has withstood the test of time. Recent practice has been to standardize prediction errors before aggregation and accumulation. This process changes the meaning of the aggregated value and simultaneously creates a new set of test statistics.

Statistical Tests

There are two issues in statistical testing: (1) whether to use parametric or nonparametric tests, and (2) which test statistics to employ once that decision is made.

Parametric Versus Nonparametric Tests: In their monthly study, BW report "the differences between the empirical frequency distribution of the test statistics and the t-distribution are not large. On the other hand, certain non-parametric tests used in event studies are not correctly specified" (1980, pp. 248-49).

Given the distinctive properties of daily data, BW reconsidered the issue in their daily-data study. They again report "methodologies based on the OLS market model and using standard parametric tests are well specified under a variety of conditions" (1985, p. 25). Dyckman, Philbrick and Stephan (1984, p. 29) say "the nonnormality of individual-security daily-return residuals has little effect on the inferences drawn from the use of the t-test applied to portfolios."

Berry, Gallinger and Henderson (1990) specifically examine the choice of parametric versus nonparametric tests with daily data. They also find that parametric t-tests work well and that the nonparametric tests do not: the empirical sampling distributions of the nonparametric tests do not follow their theoretical distributions.

The guidance from these studies is clear. Nonparametric tests are an unnecessary complication and do not work well. The choice is the simple t test or, for aggregated excess returns, tests based on sums of t's or sums of squared t's.

The t-test can be found in any statistics book. Expressed in terms of an event study, for a single day, it would be:

Student t = (AR[.sub.t] - 0)/s[.sub.t] = Z (12)

where Z is the test statistic. The student t approaches a Z if the number of observations is large (which is generally the case in event studies). Besides, t is typically used as a time counter, and the use of Z reduces confusion.

The issues in event studies are how to calculate s[.sub.t], the estimated standard deviation; how to test aggregated ARs (or PEs); and whether to standardize before aggregation.

Continuing with the notation from equation (2), the errors from the estimation period are [epsilon][.sub.jt], where j is the company and t is the observation period (day), with t = 1, ..., k (there are k days in the estimation period). The estimated standard deviation for one firm's one-day prediction error would be:

s[.sub.j] = [[summation][epsilon][.sub.jt][.sup.2]/(k - 2)][.sup.1/2] (13)

The variance for the CAR is found by summing the s[.sub.a][.sup.2] and dividing by the number of companies; the standard deviation is the square root of the variance.
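The single-day test just described can be sketched as follows. This is my illustration, not the article's: the residuals and event-day prediction errors are simulated, and the k - 2 degrees of freedom assume the two-parameter market model:

```python
import numpy as np

rng = np.random.default_rng(3)
k, N = 120, 20                             # estimation days, firms
eps = rng.normal(0.0, 0.02, (N, k))        # estimation-period market-model residuals
pe_event = rng.normal(0.0, 0.02, N)        # each firm's prediction error on day t

s_j = eps.std(axis=1, ddof=2)              # per-firm residual std, k - 2 df
ar_t = pe_event.mean()                     # AR_t: average excess return on day t
s_at = np.sqrt((s_j ** 2).sum()) / N       # std of the average across N firms
z = ar_t / s_at                            # compare with N(0, 1)
```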

More recent event studies do not test CARs. Instead, ARs are standardized before they are aggregated, and the standardized aggregates form the basis of the test statistics. BW (1985, p. 28) illustrate the technique when discussing tests assuming cross-sectional dependence. [31] What they refer to as the standardized excess return (SER) is calculated as:

SER[.sub.t] = AR[.sub.t]/s[.sub.at] (15)

where s[.sub.at] denotes the standard deviation of AR[.sub.t] for a particular day. The test statistic for N companies on that day would be:

[Mathematical Expression Omitted]

This approach to aggregation appears to be the current trend, but the application includes one more improvement. The estimate for the standard deviation on a one-day return is no longer s[.sub.j]. The estimate (s[.sub.j]) assumes that variation in the market during the event period is essentially the same as it was during the estimation period. Further, this estimate does not adjust for the number of observations in the estimation interval. These added considerations were included in the standard error of the forecast (s[.sub.ft]), originally proposed by Patell (1976) and popularized in the finance literature by Dodd and Warner (1983). The two values, s[.sub.j] and s[.sub.ft], are related as follows: [32]

s[.sub.ft] = s[.sub.j][1 + 1/k + (R[.sub.mt] - R[.sub.m])[.sup.2]/[summation from [tau] = 1 to k](R[.sub.m[tau]] - R[.sub.m])[.sup.2]][.sup.1/2] (17)

The second term under the radical (1/k) adjusts for the length of the estimation interval: the greater k, the smaller the error in the out-of-sample forecast (assuming stability in the relationship). The last term accounts for the error induced when R[.sub.mt] differs substantially from what was observed in the estimation interval, where R[.sub.m] is the average value of R[.sub.mt] in that interval. [33]

The standardized prediction error (SPE) is the PE divided by s[.sub.ft], which makes the SPE approximately unit normal, N(0, 1):

SPE[.sub.jt] = PE[.sub.jt]/s[.sub.ft] (18)

The standardized cumulative prediction error for firm j is the sum of the SPE[.sub.j] between any two days of interest, adjusted for the number of days (m) being considered, starting at t[.sub.1] and ending at t[.sub.2] (subscripts t[.sub.1] and t[.sub.2] are dropped to simplify notation):

SCPE[.sub.j] = [summation from t = t[.sub.1] to t[.sub.2]]SPE[.sub.jt]/[square root of m] (19)

The test statistic for N firms is the sum of the SCPE[.sub.j] divided by the square root of the number of firms:

Z = [summation from j = 1 to N]SCPE[.sub.j]/[square root of N] (20)

and Z is N(0, 1). This test statistic is proving quite popular, as examination of recent finance journals will document. Collins and Dent's (1984) simulation studies indicate that it works quite well in the absence of variance shifts or contemporaneous covariance. [34]
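The full standardization machinery, from the forecast standard error through the Z statistic, can be sketched in a few lines. This is an illustration of the steps described above, not code from the article; the data are simulated and all names are mine:

```python
import numpy as np

rng = np.random.default_rng(4)
k, m, N = 120, 5, 15                       # estimation days, event days, firms
r_m_est = rng.normal(0.0005, 0.01, k)      # market returns, estimation period
r_m_evt = rng.normal(0.0005, 0.01, m)      # market returns, event window
eps = rng.normal(0.0, 0.02, (N, k))        # market-model residuals, estimation period
pe = rng.normal(0.0, 0.02, (N, m))         # event-window prediction errors

s_j = eps.std(axis=1, ddof=2)              # residual std per firm, k - 2 df
rm_bar = r_m_est.mean()
ssq = ((r_m_est - rm_bar) ** 2).sum()

# Forecast standard error: widens s_j for estimation length and for event-day
# market returns far from the estimation-period mean
s_ft = s_j[:, None] * np.sqrt(1.0 + 1.0 / k + (r_m_evt - rm_bar) ** 2 / ssq)

spe = pe / s_ft                            # standardized prediction errors, eq. (18)
scpe = spe.sum(axis=1) / np.sqrt(m)        # per-firm cumulative, scaled by sqrt(m)
z = scpe.sum() / np.sqrt(N)                # approximately N(0, 1) under the null
```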

Examination of equation (18) suggests that this test statistic is essentially the same as a student t (or Z) test on a regression coefficient. Such is the case, which provides the transition to an even more convenient form of event study. Adding dummy variables to the market model creates what has been labeled the event parameter model as follows:

R[.sub.jt] = [alpha][.sub.j] + [beta][.sub.j]R[.sub.mt] + [summation from k = t[.sub.1] to t[.sub.2]][lambda][.sub.jk]D[.sub.kt] + [epsilon][.sub.jt] (21)

where there are m = (t[.sub.2] - t[.sub.1] + 1) dummy variables, each taking a value of one on one event period day and zero otherwise. In this model, [lambda][.sub.jt] is equal to PE[.sub.jt], and its t (or Z) value from the regression printout is the SPE[.sub.jt]. [35]
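The equivalence between the event parameter model and the two-step procedure is easy to demonstrate. The sketch below is mine (simulated data): the dummy coefficients from the one-pass regression reproduce the prediction errors that a separate estimation-period regression would forecast:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 125                                    # 120 estimation days + 5 event days
event = np.arange(120, 125)
r_m = rng.normal(0.0005, 0.01, T)
r_j = 0.0003 + 0.9 * r_m + rng.normal(0.0, 0.02, T)
r_j[event] += 0.015                        # inject an abnormal return in the window

D = np.zeros((T, 5))
D[event, np.arange(5)] = 1.0               # one dummy per event day
X = np.column_stack([np.ones(T), r_m, D])
b, *_ = np.linalg.lstsq(X, r_j, rcond=None)
lam = b[2:]                                # event-parameter estimates

# Two-step benchmark: fit on the estimation days, forecast the event window
est = np.arange(120)
Xe = np.column_stack([np.ones(120), r_m[est]])
a, bb = np.linalg.lstsq(Xe, r_j[est], rcond=None)[0]
pe = r_j[event] - (a + bb * r_m[event])    # ordinary prediction errors
```

This dummy-variable equivalence is what lets the t (or Z) values on the regression printout serve directly as the standardized prediction errors.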

If contemporaneous covariance is suspected to be a problem, the analysis can be run as a seemingly unrelated regression. As a practical matter, a large sample requires considerable computer capacity. However, if the signs on [epsilon][.sub.jt] are expected to be the same for all j, or the sample can be partitioned so that this is a reasonable approximation, equation (21) can be run as a single portfolio [36] in the manner suggested in equation (9).

Using an equally weighted portfolio is equivalent to testing ARs and CARs, which are equally weighted averages. Sefcik and Thompson (1986) examine other weighting schemes.

These newer econometric approaches do not change the logic of the event study. The ability to run the analysis as a more sophisticated regression allows the researcher to control for a greater variety of statistical problems. Such advances may reduce the problems of industry and calendar clustering and residual variance shifts. [37] The staying power of event study research is at least partially attributable to its ability to examine complex issues in complex settings and still provide simple, intuitively appealing results. Greater precision in testing can only enhance this attribute.

Summary and Conclusion

The sheer volume of event study literature can be imposing to researchers first considering use of the paradigm. Yet an examination of the process reveals that the similarities between various event studies are greater than the differences. All event studies follow a well-defined series of steps. For each of these steps, there is existing literature to suggest how to handle the choices involved.

The event study is a classic design. Classic designs are simple and elegant, and, above all else, functional. The event study has become a classic because it works. It can be used under less than perfect conditions and still produce reliable results.

Event study methodology research provides one consistent lesson: even the simplest versions of the event study design work. The more specialized designs may be necessary for troublesome situations, but for most applications the simpler versions do nicely.

The event study is a popular chronological frame of reference for scholarly evaluation of financial events. What is done with the information it provides is what is important.

Use of reliable shared benchmarks allows us to communicate our findings with less concern about the language of our science and reduces duplication of effort. The event study is a very serviceable design. It is easy to learn to use, it is reliable, and it is easy to interpret. These attributes should keep it from going out of style any time soon.

1 Davidson, Chandy and Cross (1987, pp. 167-69) use this approach.

2 Cross, Davidson and Thornton (1988), Lacey (1988), and VanDerhei (1987) all do this.

3 A study using security returns created using a random number generator and empirically based parameter estimates is referred to as a Monte Carlo simulation.

4 In such a case, there are additional econometric problems. Simpler event study designs depend upon calendar diversification to eliminate across-sample residual correlations.

5 Davidson, Chandy and Cross (1987, p. 167) call this the average return model.

6 Dyckman, Philbrick and Stephan (1984, pp. 16-18) also examine whether using longer return differencing helps, i.e., whether using two- or three-day returns rather than one-day is more effective. Using the single day procedure is just as effective.

7 This can be especially helpful in studying intervalling effects in event studies. See Dowen and Isberg (1988).

8 Evidence suggests some financial events do frequently follow stock runups. Mandelker (1974) suggests that mergers frequently follow runups, although later studies cast some doubt on this. Dann and Mikkelson (1984), Asquith and Mullins (1986), and Mikkelson and Partch (1986) provide evidence showing that new security offerings often follow runups.

9 See Chalk and Peavy (1987).

10 Gonedes, Dopuch, and Penman (1976, p. 113) discuss using control portfolios to control for factors other than beta.

11 From the form of equation (3) it is possible to see that the CAPM is reconcilable with the SIMM if betas and interest rates are relatively stable. Equation (3) was used in Brenner's (1979) evaluation.

12 One could estimate such parameters for later years following their approach, although the process is quite computer-intensive.

13 Stone (1974) suggests a two-factor model that was tested by Lloyd and Shick (1977). Booth, Officer and Henderson (1985) show bank betas to be interest sensitive. Flannery and James (1984) find interest sensitivity to be priced in bank returns, and Sweeney and Warga (1986) report that this is also true for utilities.

14 The mean-adjusted returns model normally is based on a before-the-event estimation period; the mean referred to is the company's average return before the event.

15 Cross, Davidson and Thornton (1988) use a post-estimation process as validation for their results from a pre-event estimation period.

16 See Fogler and Ganapathy (1982) for a review of econometrics from a specifically finance perspective.

17 See BW (1985, p. 19) and Berry, Gallinger and Henderson (1987, p. 55).

18 See BW (1985, pp. 16-17), Dyckman, Philbrick and Stephan (1984), and Atchison (1986).

19 BW (1985, p. 20).

20 From 1976 through 1988, the Patell article has been cited 94 times. Through 1988, Dodd and Warner have been cited 63 times. These figures are based on the Social Science Citation Index.

21 Collins and Dent (1984) compare the efficacy of Jaffe's (1974) test statistic, which considers contemporaneous covariances, and their own GLS measure, which allows for both contemporaneous covariance and shifts in residual variance.

22 See Dyckman, Philbrick and Stephan (1984, p. 2) for elaboration.

23 See Dyckman, Philbrick and Stephan (1984).

24 BW (1980, p. 238) reported that even the mean-adjusted returns model did well with monthly data; the control portfolio technique did not.

25 See, e.g., BW (1980, p. 236) and Dyckman, Philbrick and Stephan (1984, p. 25).

26 To the extent that events are calendar, industry, risk and market-condition clustered, the market-adjusted returns approach could have trouble isolating excess returns.

27 The SUR analysis process is explained in Theil (1971).

28 Sefcik and Thompson (1986) discuss the nuances of portfolio weighting schemes and the details of constructing portfolios where cross-correlations are also taken into account.

29 Malatesta's analysis is an assessment of GLS but not of GLS with event clustering. Malatesta's event days are not systematically clustered.

30 See Marshall (1975).

31 This general approach is suggested by BW (1980, p. 250), extended by Jaffe (1974), and tested by Collins and Dent (1984).

32 Mikkelson and Partch (1988) provide a mathematically equivalent but slightly more convenient calculation for s[.sub.ft].

33 If contemporaneous covariance is a problem, s[.sub.t] can be estimated cross-sectionally, and the proposed refinements in s[.sub.ft] still hold.

34 Karafiath and Spencer (1989) report that the Dodd and Warner (or Patell) statistic is biased toward rejection of the null of no significant excess return if the event window is long relative to the estimation period. The cause of the bias is the cross-correlation among the residuals, which this test statistic assumes to be zero. The Patell chi square statistic based on the squared t (or Z) values does not seem to suffer the same problem.

35 See Thompson (1985) and Karafiath (1988) for elaboration.

36 See Binder (1985), for example.

References

1. Asquith, Paul, Robert F. Bruner and David W. Mullins, Jr., 1983, The Gains to Bidding Firms from Merger, Journal of Financial Economics, 11: 121-39.

2. Asquith, Paul and David W. Mullins, Jr., 1986, Equity Issues and Offering Dilution, Journal of Financial Economics, 15: 61-90.

3. Atchison, Michael D., 1986, Non-Representative Trading Frequencies and the Detection of Abnormal Performance, Journal of Financial Research, 9: 343-48.

4. Ball, Ray and Phillip Brown, 1968, An Empirical Evaluation of Accounting Income Numbers, Journal of Accounting Research, 6: 159-78.

5. Basu, Sanjoy, 1981, Market Reactions to Accounting Policy Deliberations: The Inflation Accounting Case Revisited, The Accounting Review, 56: 942-54.

6. Beaver, William H., 1968, The Information Content of Annual Earnings Announcements, Journal of Accounting Research, Supplement, 6: 67-92.

7. Beaver, William H., 1982, Discussion of Market-Based Empirical Research in Accounting: A Review, Journal of Accounting Research, Supplement, 20: 323-31.

8. Berry, Michael A., George W. Gallinger and Glenn V. Henderson, Jr., 1990, Using Daily Stock Returns in Event Studies and the Choice of Parametric versus Non-Parametric Test Statistics, Quarterly Journal of Business and Economics, 29: 70-85.

9. Binder, John J., 1985a, On the Use of the Multivariate Regression Model in Event Studies, Journal of Accounting Research, 23: 370-83.

10. __ 1985b, Measuring the Effects of Regulation with Stock Price Data, Rand Journal of Economics, 16: 167-83.

11. Black, Fischer, 1972, Capital Market Equilibrium with Restricted Borrowing, Journal of Business, 45: 444-55.

12. Booth, James R., Dennis T. Officer and Glenn V. Henderson, Jr., 1985, Commercial Bank Stocks, Interest Rates and Systematic Risk, Journal of Economics and Business, 37: 303-10.

13. Bowman, Robert G., 1983, Understanding and Conducting Event Studies, Journal of Business Finance and Accounting, 10: 561-84.

14. Brenner, Menachem, 1977, The Effect of Model Misspecification on Tests of the Efficient Market Hypothesis, Journal of Finance, 32: 57-66.

15. __ 1979, The Sensitivity of the Efficient Market Hypothesis to Alternative Specifications of the Market Model, Journal of Finance, 34: 915-29.

16. Brown, Stephen J. and Jerold B. Warner, 1980, Measuring Security Price Performance, Journal of Financial Economics, 8: 205-58.

17. __ 1985, Using Daily Stock Returns: The Case of Event Studies, Journal of Financial Economics, 14: 3-32.

18. Burgstahler, David and Eric W. Noreen, 1986, Detecting Contemporaneous Market Reactions to a Sequence of Related Events, Journal of Accounting Research, 24: 170-86.

19. Chalk, Andrew J. and John W. Peavy, III, 1987, Initial Public Offerings: Daily Returns, Offering Types and the Price Effect, Financial Analysts Journal, 43: 65-9.

20. Chu, Chen-Chin, Edward L. Bubnys and C.F. Lee, 1987, Estimates of the Expected Market Risk Premium: Empirical Analysis, paper presented at the Joint National Meeting of ORSA/TIMS.

21. Collins, Daniel W. and Warren T. Dent, 1984, A Comparison of Alternative Testing Methodologies Use in Capital Market Research, Journal of Accounting Research, 22: 48-84.

22. Collins, Daniel W., Michael S. Rozeff and William K. Salatka, 1982, The SEC's Rejection of SFAS No. 19: Tests of Market Price Reversal, The Accounting Review, 57: 1-17.

23. Cross, Mark L., Wallace N. Davidson, III and John H. Thornton, 1988, Taxes, Stock Returns and Captive Insurance Subsidiaries, Journal of Risk and Insurance, 55: 331-38.

24. Dann, Larry Y. and Wayne H. Mikkelson, 1984, Convertible Debt Issuance, Capital Structure Change and Financing-Related Information, Journal of Financial Economics, 13: 157-86.

25. Davidson, Wallace N., III, P.R. Chandy and Mark Cross, 1987, Large Losses, Risk Management and Stock Returns in the Airline Industry, Journal of Risk and Insurance, 54: 163-72.

26. Dimson, Elroy, 1979, Risk Measurement When Shares are Subject to Infrequent Trading, Journal of Financial Economics, 7: 197-226.

27. Dodd, Peter and Jerold B. Warner, 1983, On Corporate Governance, Journal of Financial Economics, 11: 401-38.

28. Dowen, Robert J. and Steven C. Isberg, 1988, Re-examination of the Intervalling Effect on the CAPM Using a Residual Return Approach, Quarterly Journal of Business and Economics, 27: 114-29.

29. Dyckman, Thomas, Donna Philbrick and Jens Stephan, 1984, A Comparison of Event Study Methodologies Using Daily Stock Returns: A Simulation Approach, Journal of Accounting Research, 22: 1-33.

30. Fama, Eugene F., 1976, Foundations of Finance (New York, Basic Books).

31. Fama, Eugene F., Lawrence Fisher, Michael Jensen and Richard Roll, 1969, The Adjustment of Stock Prices to New Information, International Economic Review, 10: 1-21.

32. Fama, Eugene F. and James D. MacBeth, 1973, Risk, Return and Equilibrium: Empirical Tests, Journal of Political Economy, 81: 607-36.

33. Flannery, Mark J. and Christopher M. James, 1984, The Effect of Interest Rate Changes on the Common Stocks of Financial Institutions, Journal of Finance, 39: 1141-53.

34. Fogler, H. Russell and Sundaram Ganapathy, 1982, Financial Econometrics (Englewood Cliffs, NJ, Prentice-Hall, Inc.).

35. Foster, George, 1973, Stock Market Reaction to Estimates of Earnings Per Share by Company Officials, Journal of Accounting Research, 11: 25-37.

36. __ 1975, Security Price Revaluation Implications of Sub-Earnings Disclosure, Journal of Accounting Research, 13: 283-92.

37. Fowler, David J. and C. Harvey Rorke, 1983, Risk Measurement When Shares are Subject to Infrequent Trading: Comment, Journal of Financial Economics, 12: 279-83.

38. Gibbons, Michael R., 1980, Econometric Models for Testing a Class of Financial Models-An Application of the Nonlinear Multivariate Regression Model, Ph.D. dissertation, University of Chicago.

39. Glascock, John L., Wallace N. Davidson, III and Glenn V. Henderson, Jr., 1987, Announcement Effects of Moody's Bond Rating Changes on Equity Returns, Quarterly Journal of Business and Economics, 26: 67-78.

40. Gonedes, Nicholas J., Nicholas Dopuch and Stephen J. Penman, 1976, Disclosure Rules, Information Production, and Capital Market Equilibrium: The Case of Forecast Disclosure Rules, Journal of Accounting Research, 14: 89-137.

41. Griffin, Paul A. and Antonio Z. Sanvicente, 1982, Common Stock Returns and Rating Changes: A Methodological Comparison, Journal of Finance, 37: 103-19.

42. Gujarati, D., 1970, Use of Dummy Variables in Testing for Equality Between Sets of Coefficients in Linear Regression, American Statistician, 24: 18-21.

43. Jaffe, Jeffrey F., 1974a, The Effect of Regulation Changes on Insider Trading, The Bell Journal of Economics and Management Science, 5: 93-121.

44. __ 1974b, Special Information and Insider Trading, Journal of Business, 47: 410-28.

45. Karafiath, Imre, 1988, Using Dummy Variables in the Event Methodology, Financial Review, 23: 351-57.

46. Karafiath, Imre and David E. Spencer, 1989, Statistical Inference in Multi-Period Event Studies: An Assessment, working paper, authors currently at University of North Texas and Massachusetts Institute of Technology (visiting scholar), respectively.

47. Klein, April and James Rosenfeld, 1987, The Influence of Market Conditions on Event Study Residuals, Journal of Financial and Quantitative Analysis, 22: 345-51.

48. Lacey, Nelson J., 1988, Recent Evidence on the Liability Crisis, Journal of Risk and Insurance, 55: 499-508.

49. Langetieg, Terrence C., 1978, An Application of a Three-Factor Performance Index to Measure Stockholders' Gains from Merger, Journal of Financial Economics, 6: 365-83.

50. Lloyd, William P. and Richard A. Shick, 1977, A Test of Stone's Two-Index Model of Returns, Journal of Financial and Quantitative Analysis, 12: 363-76.

51. Malatesta, Paul H., 1986, Measuring Abnormal Performance: The Event Parameter Approach Using Joint Generalized Least Squares, Journal of Financial and Quantitative Analysis, 21: 27-38.

52. __and Rex Thompson, 1985, Partially Anticipated Events, Journal of Financial Economics, 14: 237-50.

53. Mandelker, Gershon, 1974, Risk and Return: The Case of Merging Firms, Journal of Financial Economics, 1: 303-35.

54. Marshall, Ronald M., 1975, Interpreting the API, The Accounting Review, 50: 99-111.

55. Mikkelson, Wayne H. and M. Megan Partch, 1986, Valuation Effects of Security Offerings and the Issuance Process, Journal of Financial Economics, 15: 31-59.

56. Ohlson, James A. and Barr Rosenberg, 1982, Systematic Risk of the CRSP Equal-Weighted Common Stock Index: A History Estimated by Stochastic-Parameter Regression, Journal of Business, 55: 121-45.

57. Patell, James M., 1976, Corporate Forecasts of Earnings Per Share and Stock Price Behavior: Empirical Tests, Journal of Accounting Research, 14: 246-76.

58. __ 1979, The API and the Design of Experiments, Journal of Accounting Research, 17: 528-49.

59. Patell, James M. and Mark A. Wolfson, 1979, Anticipated Information Releases Reflected in Call Option Prices, Journal of Accounting and Economics, 1: 117-40.

60. Penman, Stephen H., 1982, Insider Trading and the Dissemination of Firms' Forecast Information, Journal of Business, 55: 479-503.

61. Peterson, Pamela P., 1989, Event Studies: A Review of Issues and Methodology, Quarterly Journal of Business and Economics, 28: 36-66.

62. Pinches, George E. and J. Clay Singleton, 1978, The Adjustment of Stock Prices to Bond Rating Changes, Journal of Finance, 33: 29-44.

63. Reinganum, Mark R., 1982, A Direct Test of Roll's Conjecture on the Firm Size Effect, Journal of Finance, 37: 27-35.

64. Roll, Richard, 1981, A Possible Explanation of the Small Firm Effect, Journal of Finance, 36: 879-88.

65. Scholes, Myron and Joseph Williams, 1977, Estimating Betas from Nonsynchronous Data, Journal of Financial Economics, 5: 309-28.

66. Sefcik, Stephan E. and Rex Thompson, 1986, An Approach to Statistical Inference in Cross-Sectional Models with Security Abnormal Returns as Dependent Variable, Journal of Accounting Research, 24: 316-34.

67. Sprecher, C. Ronald and Mars A. Pertl, 1983, Large Losses, Risk Management and Stock Prices, Journal of Risk and Insurance, 50: 107-117.

68. Stickel, Scott E., 1984, The Effect of Preferred Stock Rating Changes on Preferred and Common Stock Prices, working paper, University of Chicago.

69. Stone, Bernell K., 1974, Systematic Interest Rate Risk in a Two-Index Model of Returns, Journal of Financial and Quantitative Analysis, 9: 709-21.

70. Sweeney, Richard James and Arthur D. Warga, 1986, The Pricing of Interest-rate Risk: Evidence from the Stock Market, Journal of Finance, 41: 393-410.

71. Theil, H., 1971, Principles of Econometrics (New York, John Wiley and Sons).

72. Theobald, Michael, 1983, The Analytic Relationship Between Intervalling and Nontrading in Continuous Time, Journal of Financial and Quantitative Analysis, 18: 199-209.

73. Thompson, Joel E., 1988, More Methods that Make Little Difference in Event Studies, Journal of Business Finance and Accounting, 15: 77-86.

74. Thompson, Rex, 1985, Conditioning the Return-Generating Process in Firm-Specific Events: A Discussion of Event Study Methods, Journal of Financial and Quantitative Analysis, 20: 151-68.

75. VanDerhei, Jack L., 1987, The Effect of Voluntary Termination of Overfunded Pension Plans on Shareholder Wealth, Journal of Risk and Insurance, 54: 131-56.

76. Watts, Ross L., 1973, The Information Content of Dividends, Journal of Business, 46: 191-211.

77. __ 1978, Systematic 'Abnormal' Returns After Quarterly Earnings Announcements, Journal of Financial Economics, 6: 127-50.

78. Winsen, Joseph K., 1977, A Reformation of the API Approach to Evaluating Accounting Income Numbers, Journal of Financial and Quantitative Analysis, 12: 499-504.

79. Zellner, Arnold, 1962, An Efficient Method of Estimating Seemingly Unrelated Regressions and Tests for Aggregation Bias, Journal of the American Statistical Association, 57: 348-68.

Glenn V. Henderson, Jr. is Briggs Swift Cunningham Professor of Finance at the University of Cincinnati. Appreciation is expressed to David Johnson and Rajiv Kalra for their assistance with the library research necessary for this article. The author is also grateful for the suggestions on earlier drafts received from David Johnson and Imre Karafiath. The Editor, S. Travis Pritchett, was especially helpful in improving the exposition in the article. An earlier version of this article was presented as a pedagogical seminar at the 1989 American Risk and Insurance Association meeting in Denver, Colorado in August 1989.

Publication: Journal of Risk and Insurance, June 1, 1990.