
Presidential election polls in 2000: a study in dynamics.

The study of voters and elections has taught us a lot about individuals' vote choices and election outcomes themselves. We know that voters behave in fairly understandable ways on election day (see, e.g., Alvarez 1997; Campbell 2000; Campbell et al. 1960; Gelman and King 1993; Johnston et al. 1992; Lazarsfeld, Berelson, and Gaudet 1944; Lewis-Beck 1988). We also know that the actual outcomes are fairly predictable (see, e.g., Campbell and Garand 2000). Of course, what we do know is imperfect. (1) Even to the extent we can predict what voters and electorates do at the very end, we know relatively little about how voter preferences evolve to that point. How does the outcome come into focus as the election campaign unfolds? Put differently, how does the campaign bring the fundamentals of the election to the voters?

Previous research suggests that preferences evolve in a fairly patterned and understandable way (Campbell 2000; Wlezien and Erikson 2002). This research focuses on the relationship between election results for a set of years and trial-heat poll readings at varying points in the election cycle, mostly for presidential elections in the United States. (2) It shows that the predictability of outcomes increases as the polling date approaches election day. The closer we are to the end of the race, the more the polls tell us about the ultimate outcome. Although this may not be surprising, it is important: the basic pattern implies that electoral sentiment crystallizes over the course of election campaigns.

The previous research takes us only part of the way, however, as it does not explicitly address dynamics. This is quite understandable; after all, we lack anything approaching a daily time series of candidate preferences until the most recent elections. In this context, the U.S. presidential race in 2000 offers a rare opportunity. The volume of available data for this election allows us to directly observe the dynamics of voter preferences for much of the election cycle. We cannot generalize from a single series of polls, but we can explore at much greater depth than has been possible in the past.

The analysis in this article attempts to answer two specific questions. First, to what extent does the observable variation in poll results reflect real change in electoral preferences as opposed to survey error? Second, to the extent poll results reflect real change in preferences, did this change last or did it decay? Answers to these questions tell us a lot about the evolution of electoral sentiment during the 2000 presidential race. They also tell us something about the effects of the election campaign itself. Now, let us see what we can glean from the data.

The Polls

For the 2000 election year itself, the pollingreport.com Web site contains some 524 national polls of the Bush-Gore(-Nader) division reported by different survey organizations. In each of the polls, respondents were asked how they would vote "if the election were held today," with slight differences in question wording. Where multiple results for different universes were reported for the same polling organization and dates, data for the universe that best approximates the actual voting electorate are used, for example, a sample of likely voters over a sample of registered voters. Most important, all overlap in the polls (typically tracking polls) conducted by the same survey houses for the same reporting organizations is removed. For example, where a survey house operates a tracking poll and reports three-day moving averages, we use poll results for only every third day. This leaves 295 separate national polls. Wherever possible, respondents who were undecided but leaned toward one of the candidates were included in the tallies.

Figure 1 displays results for the complete set of polls. Specifically, it shows Gore's percentage share of the two-party vote (ignoring Nader and Buchanan) for each poll. Since most polls are conducted over multiple days, each poll is dated by the middle day of the period the survey is in the field. (3) The 295 polls allow readings for 173 separate days during 2000, 59 of which are after Labor Day, which permits a virtual day-to-day monitoring of preferences during the general election campaign. However, it is important to note that polls on successive days are not truly independent. Although they do not share respondents, they do share overlapping polling periods. Thus, polls on neighboring days will capture a lot of the same things by definition. This is of consequence for our analysis of dynamics.
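
To make the dating convention concrete, the following is a minimal Python sketch of the mid-date rule described above and in footnote 3. The function name and the use of datetime objects are illustrative assumptions, not part of the original data processing.

```python
from datetime import date, timedelta

def poll_mid_date(start: date, end: date) -> date:
    """Date a poll by the middle day of its field period.

    For an even number of days in the field, the fractional
    midpoint is rounded up to the following day (footnote 3).
    """
    days_in_field = (end - start).days + 1
    # Integer division rounds the fractional midpoint up to the
    # later day when the field period has an even length.
    offset = days_in_field // 2
    return start + timedelta(days=offset)

# Example: a four-day field period (June 1-4) is dated June 3.
print(poll_mid_date(date(2000, 6, 1), date(2000, 6, 4)))
```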

[FIGURE 1 OMITTED]

The data in Figure 1 indicate some patterned movement in the polls over time. For any given date, however, the poll results differ quite considerably. Some of the noise is mere sampling error; there are other sources of survey error as well, as we will see. The daily poll-of-polls in Figure 2 reveals a more distinct pattern. The observations in the figure represent Gore's share among all respondents aggregated by the middate of the reported polling period. We see more clearly that Gore began the year well behind Bush and gained through the spring, after which his support settled at around 47 percent until the conventions. We then see the (fairly) predictable convention bounces, out of which Gore emerged in the lead heading into the autumn. Things were playing out as political science election forecasters might have expected, much as in 1988 (see Wlezien 2001): the sitting vice president is running, the economy and presidential approval are favorable, he is behind in the polls early in the year, and then he gains the lead for good after the party convention. But there the parallel with 1988 stops. Gore's support declined fairly continuously until just before election day, when it rebounded sharply. The polls in the field at the very end of the campaign indicated a dead heat.

[FIGURE 2 OMITTED]

Survey Error and the Polls

Trial-heat poll results represent a combination of true preferences and survey error. Survey error comes in many forms, the most basic of which is sampling error. All polls contain some degree of sampling error. Thus, even when the division of candidate preferences does not change, we will observe changes from poll to poll. This is well known. All survey results also contain design effects, the consequences of the departure in practice from simple random sampling that results from clustering, stratifying, and the like (Groves 1989). When studying election polls, the main source of design effects relates to the polling universe. It is not easy to determine who will vote on election day: when we draw our samples, all we can do is estimate the voting population. Survey organizations typically rely on likely voter screens. In addition, most organizations use some sort of weighting procedure, for example, weighting by a selected distribution of party identification or some other variable that tends to predict the election day vote. How organizations screen and weight has important consequences both for the cross-sectional poll margins at each point in time and for the variance in the polls over time (Wlezien and Erikson 2001).

Given that we are combining polls from various survey organizations, house effects are another source of error. Different houses employ different methodologies, and these can affect poll results. Much of the observed difference in results across survey houses may reflect differences in screening and weighting practices noted above. Results also can differ across houses due to data collection mode, interviewer training, procedures for coping with refusals, and the like (see, e.g., Converse and Traugott 1986; Lau 1994; also see Crespi 1988). As with design effects, poll results will vary from day to day because the polls reported on different days are conducted by different houses.

Now, we cannot perfectly correct for survey error. We cannot eliminate sampling error. We also cannot perfectly correct for design and house effects, as we have only limited information about what survey organizations actually do. We can to some extent account for these effects, however. That is, we can control for the polling universe--at least broadly defined--as well as the different survey houses. The data include poll results from thirty-six different survey organizations. These organizations sampled three different polling universes during the year, specifically, adults, registered voters, and likely voters. (Of course, as noted above, these universes do not necessarily mean the same things to different organizations, particularly as relates to "likely" voters.) To adjust for possible design and house effects, the results for all 295 polls are regressed on a set of thirty-five (N - 1) survey house dummy variables and two polling universe dummy variables. Dummy variables also were included for each of the 173 dates with at least one survey observation. With no intercept, the coefficients for the survey dates constitute estimates of public preferences over the course of the campaign.
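
As a rough illustration of this adjustment, the sketch below sets up the dummy-variable regression with pandas and statsmodels. The data frame and its column names (polls, gore2pty, mid_date, house, universe) are hypothetical stand-ins for the actual data set; the original analysis may well have used different software.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the 295-poll data set.
polls = pd.DataFrame({
    "gore2pty": [47.0, 49.5, 48.0, 50.2, 46.5, 49.0, 47.8, 48.6],
    "mid_date": ["06-01", "06-01", "06-02", "06-02",
                 "06-03", "06-03", "06-01", "06-03"],
    "house":    ["Gallup", "ABC", "Gallup", "CBS",
                 "ABC", "CBS", "CBS", "Gallup"],
    "universe": ["LV", "RV", "LV", "RV", "RV", "LV", "LV", "RV"],
})

# With no intercept (the leading 0), patsy codes C(mid_date) as a
# full set of dummies, while C(house) and C(universe) each drop a
# reference level -- the N - 1 house dummies and two universe
# dummies in the text. The date coefficients then constitute the
# adjusted daily estimates of public preferences.
model = smf.ols(
    "gore2pty ~ 0 + C(mid_date) + C(house) + C(universe)",
    data=polls,
).fit()
adjusted = model.params.filter(like="C(mid_date)")
print(adjusted)
```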

The results of the analysis are shown in Table 1. Here, we can see that the general polling universe did not meaningfully affect poll results during 2000, controlling for survey house and date. This is not entirely surprising, as the same was true in 1996 (see Erikson and Wlezien 1999). Table 1 also shows that survey house did matter in 2000. For the full election year, the range of the house effects is more than six percentage points (also see Traugott 2001). After Labor Day, the range of effects is even greater, approximately eight percentage points. These are big differences and ones that are difficult to fully explain given the available information about the practices of different survey houses. There is reason to suppose that the observed differences across houses largely reflect underlying differences in design (Wlezien and Erikson 2001), though we cannot be sure. Whatever the source, the differences have consequences for our portrait of preferences during 2000, as poll results will differ from day to day merely because different houses report on different days.

Also notice in Table 1 that the survey date effects easily meet conventional levels of statistical significance (p < .001) during both the full election year and the general election campaign after Labor Day. The result implies that underlying electoral preferences changed over the course of the campaign. This is of obvious importance. Figure 3 displays the survey date estimates from the regression in Table 1. Because of the substantial house effects, it was necessary to recenter the estimates; the median house from the analysis of variance, ABC, was used. (4) These readings exhibit more pattern than the unadjusted polls in Figure 2, especially late in the cycle. Figure 4 zooms in on the post-Labor Day period. Here, we can see that there still is some evidence of noise in these poll estimates, partly the result of sampling error. There also is reason to think that the series contains design and house effects that are not easily captured statistically because they vary over time. The problem is that we do not know. We thus must accept that our series of polls, even adjusted for systematic house and design effects, is imperfect. Still, these data can tell us quite a lot, as we will see.

[FIGURES 3-4 OMITTED]

An Analysis of Poll Variance

Our adjusted poll estimates contain random sampling error. We cannot simply separate this sampling error from reported preferences. We nevertheless can ask, What portion of the remaining variance is real? Assuming random sampling or its equivalent by pollsters, the answer is relatively easy to compute using the frequencies and sample sizes of the actual polls (Heise 1969). That is, we can determine the observed variance of poll results where underlying preferences do not change. (5)

For each daily poll-of-polls, the estimated error variance is $p(1 - p)/n$, where $p$ is the proportion who prefer the Democratic candidate rather than the Republican and $n$ is the number of respondents offering preferences for either candidate. For the value of $p$, we simply insert the observed Democratic proportion of major-party preferences in the daily poll reading. For $n$, we take the number of respondents offering Democratic or Republican preferences on that day. Simple calculations give the estimated error variance for each daily poll-of-polls. For example, where preferences are divided fifty-fifty in a sample of 1,000, the estimated error variance is 0.00025, or 2.5 when expressed in percentage points (50 × 50/1,000). The error variance for all polls-of-polls is simply the mean error variance over the campaign. The estimated true variance is the arithmetic difference between the variance we observe in the poll readings and the estimated error variance. The ratio of the estimated true variance to the observed variance is the statistical reliability.
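
The following sketch implements this variance decomposition. It assumes arrays of daily poll-of-polls proportions and sample sizes; the names are illustrative.

```python
import numpy as np

def variance_decomposition(p, n):
    """Split observed poll variance into sampling error and 'true'
    variance, following the Heise (1969) logic in the text.

    p : array of daily Gore shares of two-party preferences (0-1)
    n : array of respondents offering a major-party preference
    """
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    # Binomial sampling variance for each daily poll-of-polls,
    # expressed in squared percentage points: p(1 - p)/n * 100^2.
    error_var = np.mean(p * (1 - p) / n) * 100 ** 2
    total_var = np.var(p * 100)          # observed daily variance
    true_var = total_var - error_var     # estimated real variance
    reliability = true_var / total_var
    return total_var, error_var, true_var, reliability

# Sanity check from the text: a 50-50 split with n = 1,000 gives
# an error variance of 2.5 (in percentage-point terms).
assert abs(0.5 * 0.5 / 1000 * 100 ** 2 - 2.5) < 1e-9
```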

The results of conducting such an analysis using our series of adjusted readings for the 2000 presidential election are shown in Table 2A. The first row of the table contains results for the full election year. Specifically, it shows the average daily variance of our adjusted series of polls along with the estimated error variance, the resulting "true" variance, and the corresponding reliability statistic. Of greatest importance is that most (77 percent) of the variance we observe over the election year is real, not the mere result of sampling error. The estimated real variance in preferences over the election year is just about 8 percentage points. The standard deviation is about 2.8 percentage points, which implies an average daily confidence interval of plus-or-minus 5.6 percentage points (two standard deviations) around the observed percentage for Gore. The estimated range of real movement thus was about 11 points.

In the second row of Table 2A, we can see that the estimates for the last sixty days of the campaign are smaller but not by a lot. Although the observed poll variance during the period is only 6.50 percentage points, the estimated error variance also is relatively small because the ns increase markedly late in the campaign. Just about the same portion (75 percent) of the observed variance is real, or at least appears to be. The estimated true variance is 4.89 percentage points, which implies a range of real movement of almost 9 percentage points during the autumn. This is almost twice what we observed for the same period in 1996 (see Erikson and Wlezien 1999). The number also is quite large by longer historical standards. As is clear in Table 2B, over the fourteen presidential elections between 1944 and 1996, the mean estimated true variance during the last sixty days of the cycle is only 2.45 percentage points. The median is 1.74 points. (For specifics, see Wlezien and Erikson 2002.) Based on this analysis, then, it appears that campaign events not only had real effects on preferences in 2000, they had much greater effects than in most previous elections.

Now, we have seen that the polls indicate real change in voter preferences during the 2000 campaign. This is interesting and important. However, it is fair to wonder about what caused preferences to change. Why did preferences vary so much during the campaign? What exactly happened to produce the evident ebb and flow? It is hard to tell for sure. We know that campaigns represent a series of many events. The problem is that most events are difficult to identify conceptually or empirically. Indeed, when studying campaign effects, political scientists typically focus on the effects of very prominent events, such as nominating conventions and general election debates in the United States (see, e.g., Holbrook 1996; Shaw 1999). (6) We can ask, How much of the variance in the polls is actually due to conventions and debates? How much reflects the many other events that occur during the course of a campaign?

Previous research (Wlezien and Erikson 2001) is useful here. This research indicates that conventions and debates account for only a modest portion of the variance in poll results. In 2000, for example, the entire convention and debate seasons account for at most 31 percent of the variance in the polls over the course of the election year. At least 69 percent of the variance thus reflects other things. During the autumn campaign, the entire debate season accounts for up to a mere 10 percent of the poll variance after Labor Day. The numbers are much the same for 1996. (7) The results tell us that the numerous small events, when taken together, had a much greater impact than the handful of very visible events that occupy most media and scholarly attention. The problem, of course, is that these "other" events are difficult to identify. It is even more difficult to actually detect their effects (see, e.g., Zaller 2002).

Thus far, we have seen that electoral preferences changed meaningfully during the course of the 2000 presidential election campaign, though we are not sure about what exactly caused preferences to change. Was it the behavior of candidates and their campaigns? Or was it the simple result of shifts in basic fundamental variables, such as the economy itself? We cannot tell. We can, however, tell whether the evident effects really mattered. That is, we can assess the extent to which they actually lasted to affect the outcome on election day.

An Analysis of Dynamics

Consider the time series of aggregate voter preferences ($V_t$) during the 2000 presidential election cycle to be of the following form:

(1) $V_t = \alpha + \beta V_{t-1} + \gamma_t$,

where $V_t$ is one candidate's percentage share in the polls and $\gamma_t$ is a series of independent campaign shocks drawn from a normal distribution. That is, preferences on one day are modeled as a function of preferences on the preceding day and the new effect of campaign events, broadly defined. The effect on any given day could reflect the delayed reaction to events from earlier days. Now, it has been shown elsewhere that this very simple equation allows us to characterize different general models of campaign dynamics (Wlezien and Erikson 2002). In theory, dynamics are directly evident from the coefficient $\beta$ in equation 1.

If $0 \le \beta < 1$, effects on preferences decay. As an autoregressive (AR) process, preferences tend toward the equilibrium of the series, which is $\alpha/(1 - \beta)$. This equilibrium does not represent the final outcome, as what happens on election day also will reflect late campaign effects that have not fully dissipated by the time voters go to the polls. The degree to which late campaign effects do matter is evident from $\beta$, which captures the rate of carryover from one point in time to the next. The smaller the $\beta$, the more quickly effects decay. At the extreme, $\beta$ would be 0, which technically is a moving average process. Here, campaign events move true preferences a fraction of a percentage point or so on one day, and the shock fully dissipates by the next day. If this is the correct model, daily campaign effects would be of no electoral relevance except for those that occur at the very end of the race, on election day itself. In one sense, this characterization of campaign effects is implicit in most forecasting models of election outcomes, where the "fundamentals" are assumed to be constant for much of the campaign (see the collection of models in Campbell and Garand 2000).
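
A short simulation may help fix ideas about the stationary case. Assuming illustrative values of $\beta = .60$ and $\alpha$ chosen so the equilibrium is 50 percent (these are not estimates from the 2000 polls), a one-time 3-point shock decays geometrically:

```python
# Simulate equation 1 with a stationary AR(1) parameter to show
# how a single shock decays. All values are illustrative.
beta, alpha = 0.60, 20.0      # equilibrium = alpha/(1 - beta) = 50
v = 50.0
path = []
for t in range(10):
    shock = 3.0 if t == 0 else 0.0   # one 3-point campaign shock
    v = alpha + beta * v + shock
    path.append(v - 50.0)            # deviation from equilibrium

# With beta = .60, the shock's half-life is ln(.5)/ln(.60), about
# 1.4 days, so little of it survives a week.
print([round(d, 2) for d in path])   # 3.0, 1.8, 1.08, 0.65, ...
```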

Now, if $\beta$ equals 1.00, campaign effects actually cumulate. Each shock makes a permanent contribution to voter preferences. As an "integrated" process, preferences wander over the election cycle and become more and more telling about the final result. The actual outcome is simply the sum of all shocks that have occurred during the campaign up to and including election day. This clearly is a very strong model of campaign effects. It is the one implied by online processing models of voter preferences (see, e.g., Lodge, Steenbergen, and Brau 1995).

It may be that neither one of these models applies strictly to all campaign effects. That is, preferences may not evolve as either a pure AR or integrated process but as a "combined" process, in which some effects decay and others persist (Wlezien 2000). Certain events may have temporary effects and others permanent ones. Some events may produce both effects, in which preferences move and then bounce back, though to a different level. We can represent this process as follows:

(2) $V_t = V^*_{t-1} + \beta(V_{t-1} - V^*_{t-1}) + \gamma_t + u_t$,

where $0 \le \beta < 1$ and $u_t$ represents the series of shocks to the fundamentals. (8) In this model, some effects ($u_t$) persist, forming part of the moving equilibrium $V^*_t = V^*_{t-1} + u_t$, and the rest ($\gamma_t$) decay. The ultimate outcome is the election day equilibrium $V^*_{ED}$, which is the final tally of $u_t$ plus the effects of late-arriving campaign shocks ($\gamma_t$) that have not fully decayed. As a combined series, preferences would evolve much like an integrated process, though with less drift. (9) Indeed, statistical theory (Granger 1980) tells us that any series that contains an integrated component is itself integrated. The intuition is that because of its expanding variance over time, the integrated component will dominate in the long run. (10)
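
To see what such a combined process looks like, the sketch below simulates equation 2 under illustrative parameter values (again, assumptions rather than estimates from the 2000 polls): a random-walk equilibrium driven by permanent shocks $u_t$ plus an AR(1) deviation driven by transient shocks $\gamma_t$.

```python
import numpy as np

# Simulate a 'combined' process: integrated equilibrium V* plus a
# stationary AR(1) deviation. Parameter values are illustrative.
rng = np.random.default_rng(1)
T, beta = 300, 0.4
u = rng.normal(0, 0.5, T)       # permanent shocks to fundamentals
g = rng.normal(0, 1.0, T)       # transient campaign shocks
v_star = 50 + np.cumsum(u)      # moving equilibrium V*_t
d = np.zeros(T)                 # AR(1) deviation from V*
for t in range(1, T):
    d[t] = beta * d[t - 1] + g[t]
v = v_star + d                  # observed preferences

# Unlike a pure AR(1) with beta = .4, whose autocorrelation at
# lag 5 would be near zero (.4**5 is about .01), the combined
# series stays correlated across long lags.
for lag in (1, 5, 10):
    r = np.corrcoef(v[:-lag], v[lag:])[0, 1]
    print(lag, round(r, 2))
```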

In theory, then, we can tell quite a lot about the persistence of campaign effects in 2000 by simply estimating equation 1 using our series of adjusted polls. We want to know whether the AR(1) parameter is equal to or less than 1.0: if the parameter equals 1.0, we can conclude that at least some effects persist; if the parameter is less than 1.0, campaign effects would appear to decay. The results of this basic analysis are shown in Table 3A. In the table, we can see that the AR parameter is 0.60 (SE = .07) for the full election year and only 0.43 (SE = .11) for the post-Labor Day period. These estimates clearly are well below 1.0, and basic Dickey-Fuller (DF) tests in the first row of Table 3B indicate that they are significantly so (p < .001). (11) The DF test regresses the change in a variable on its lagged level and then compares the t statistic of the coefficient. The intuition is straightforward. If the coefficient for the lagged-level variable is indistinguishable from 0, we know that future change in the variable is unrelated to its current level. This tells us that the variable is integrated. If the coefficient is negative and significantly different from 0, future change is related to the current level in predictable ways; that is, the variable regresses to the equilibrium or long-term mean of the series. This implies that the series is stationary. Note that appropriate critical values for the DF t-tests are nonstandard (see MacKinnon 1991).

Taken alone, these results suggest that the process is stationary and that all effects on preferences decay, and quite quickly.

Employing the more general augmented Dickey-Fuller (ADF) test indicates something quite different, however. This can be seen in the second row of Table 3B. (12) These results imply that preferences are not stationary and that they actually are integrated instead; that is, at least some meaningful portion of the changes in preferences persists. Such contradictory DF and ADF results might seem troubling. They really are not surprising, however. Indeed, the pattern is exactly what we would expect of a series that combines integrated and stationary processes, where some effects last and the rest decay. (13)
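
For readers who want to reproduce this style of diagnosis, a minimal sketch using the adfuller function from statsmodels follows. The readings series is a hypothetical stand-in for the adjusted daily polls, and the lag choices mirror Table 3B.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Hypothetical stand-in for the adjusted daily poll readings; in
# practice this would be the 111 (or 54) daily estimates.
rng = np.random.default_rng(2)
readings = 50 + np.cumsum(rng.normal(0, 1, 120))

# Basic Dickey-Fuller test: no lagged differences.
df_stat, df_p, *_ = adfuller(readings, maxlag=0,
                             regression="c", autolag=None)

# Augmented Dickey-Fuller test with three lags of differenced
# readings, as in note (a) of Table 3B. Reported p values are
# MacKinnon approximate p values.
adf_stat, adf_p, *_ = adfuller(readings, maxlag=3,
                               regression="c", autolag=None)

print(f"DF: {df_stat:.2f} (p = {df_p:.2f})")
print(f"ADF: {adf_stat:.2f} (p = {adf_p:.2f})")
```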

The pattern of these results is provocative and suggestive. The results nevertheless may be deceiving. As mentioned earlier in the text, poll readings on consecutive days share overlapping reporting periods. Thus, it is possible that the apparent day-to-day relationship is artifactual, a mere tautology; that is, we may be representing (much of) the same things on both sides of the equation. The most obvious and basic way to address the issue is to examine the autocorrelation function for our series of poll readings across different lags. We are particularly interested in whether the correlations remain fairly stable or else decline as the gap between readings increases. If the process is stationary and campaign effects decay, the correlation between poll readings will decline geometrically. Specifically, if the AR(1) parameter is $\rho$, the correlation between poll readings $x$ days apart is $\rho^x$. If campaign effects persist, conversely, the correlation will remain fairly constant as the number of days between poll readings increases.

Table 4 presents the correlations between adjusted poll results and their lagged values over one to ten days, separately for the entire election year and the post-Labor Day period. Consider first the pattern of correlations for the full year in the first column. For expository purposes, the correlations are plotted in Figure 5. These correlations remain fairly constant across lags. The correlation with the poll reading at day $t - 1$ is understandably high (.63) given the overlapping polling periods on successive days. For lags of two to seven days, however, the correlations change only slightly. Even with a lag of ten days, where there is no overlap among polling periods, the correlation between poll readings is a robust .43. This tells us that preferences are not strictly stationary: the correlations across lags do not decay geometrically. The pattern of correlations is actually what we would expect of a combined process, where some effects last indefinitely and others decay (Wlezien 2000). It implies that some meaningful portion of campaign effects in 2000 did not decay and instead persisted over time. (14)

[FIGURE 5 OMITTED]

The correlations for the post-Labor Day period in the second column of Table 4 reveal a similar pattern (also see Figure 6). Although these correlations all are slightly lower and more erratic than those for the full year, they nevertheless remain fairly flat across lags. Of course, we still have not taken into account sampling error, which dampens the observed autocorrelations. To obtain true over-time correlations, we simply divide the observed correlations by the statistical reliability of the poll readings (Heise 1969). Assuming the upper-bound estimate of .75 from Table 2A, the resulting correlations for the ten lags hover around .60. This is clear in Figure 7, which displays the adjusted autocorrelation function. That the correlations all are below 1.0 implies that some campaign effects did not last. That the correlations are flat, however, indicates that a significant portion of campaign effects actually did persist over time to affect the outcome. The 2000 presidential election campaign, it appears, really mattered.
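
A brief sketch of this disattenuation arithmetic follows, under the assumption that the adjusted daily readings sit in an array with gaps (NaNs) for days without polls; the function and variable names are illustrative.

```python
import numpy as np

def adjusted_acf(series, reliability, max_lag=10):
    """Observed and reliability-adjusted autocorrelations.

    Dividing the observed correlation by the reliability of the
    readings (Heise 1969) corrects for attenuation due to
    sampling error. Days without polls (NaN) are dropped pairwise.
    """
    out = {}
    x = np.asarray(series, dtype=float)
    for lag in range(1, max_lag + 1):
        a, b = x[:-lag], x[lag:]
        ok = ~(np.isnan(a) | np.isnan(b))
        r = np.corrcoef(a[ok], b[ok])[0, 1]
        out[lag] = (r, r / reliability)
    return out

# With the upper-bound reliability of .75 from Table 2A, an
# observed lag-10 correlation of .43 disattenuates to about .57.
x = 50 + np.cumsum(np.random.default_rng(3).normal(0, 1, 120))
for lag, (raw, adj) in adjusted_acf(x, 0.75).items():
    print(lag, round(raw, 2), round(adj, 2))
```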

[FIGURES 6-7 OMITTED]

Discussion

Political scientists debate the role of political campaigns and campaign events in electoral decision making (see, for example, Alvarez 1997; Campbell 2000; Gelman and King 1993; Holbrook 1996; Wlezien and Erikson 2002). An examination of polls during the 2000 presidential election cycle suggests that for this particular campaign, events mattered quite a lot, at least in comparison with previous elections. Although much of the variance of poll results over time is the result of survey error, a relatively large portion appears to reflect change in underlying preferences, both over the course of the full election year and the autumn campaign itself. Perhaps more important, there is strong evidence that a meaningful portion of the effects of the campaign did not decay but actually persisted over time to affect the outcome. The 2000 presidential election campaign, it appears, mattered quite a lot.

Of course, the analysis has focused on a single election in a single year. What about presidential elections in other years? What does the research tell us about the effects of election campaigns more generally? Clearly, one cannot generalize based on the foregoing analysis. After all, a fairly similar analysis of polls during the 1996 general election campaign revealed almost no effects whatsoever (Erikson and Wlezien 1999). Even in 2000, it is not clear how the campaign mattered. What events had effects? Which ones lasted and which, if any, decayed? The foregoing analyses offer little insight. What they offer are very general conclusions about the dynamics of electoral preferences in a particular year. Given the available data, this may be all that we can hope to provide.
TABLE 1

An Analysis of General Survey Design and House Effects on Presidential Election Polls, 2000

Variable                   Election Year    After Labor Day

Poll universe              0.12 (.89)       0.08 (.78)
Survey house               1.44 (.09)       2.15 (.01)
Survey date                3.88 (.00)       4.31 (.00)

R^2                        .90              .88
Adjusted R^2               .68              .68
Mean squared error         2.79             1.74
Number of polls            295              135
Number of respondents      267,974          130,024

Note: The numbers corresponding to the variables are F statistics.
The numbers in parentheses are two-tailed p values.

TABLE 2A

Daily Variance of House-Adjusted Poll Readings in 2000

Time Frame         Total Variance    Error Variance    True Variance    Reliability

Full year          10.05             2.29              7.76             .77
Last sixty days     6.50             1.61              4.89             .75

TABLE 2B

Daily Variance of Trial-Heat Polls during the Final Sixty Days, 1944-96

          Total Variance    Error Variance    True Variance    Reliability

Mean      4.21              1.92              2.45             .47
Median    3.87              1.82              1.74             .44

Note: Based on polls that have not been adjusted for house effects.

TABLE 3A

A Basic Autoregressive Model of Daily Poll Readings, 2000

                          Election Year    After Labor Day

Poll reading (t - 1)      .60 (.07)        .43 (.11)
R^2                       .40              .23
Adjusted R^2              .40              .21
Mean squared error        5.81             4.04
Number of cases           111              54

Note: The numbers in parentheses are standard errors.

TABLE 3B

A Diagnosis of Time-Serial Properties

                                         Election Year    After Labor Day

Dickey-Fuller statistic                  -5.73            -4.99
MacKinnon p value                        .00              .00
Number of cases                          111              54

Augmented Dickey-Fuller statistic (a)    -0.09            -0.55
MacKinnon p value                        .96              .88
Number of cases                          44               41

(a.) Using three lags of differenced poll readings.

TABLE 4

Autocorrelation Function for Daily Poll Readings, 2000

          Correlation

Lag       Election Year    Post-Labor Day

t - 1     .63              .48
t - 2     .57              .52
t - 3     .52              .37
t - 4     .55              .53
t - 5     .51              .44
t - 6     .54              .55
t - 7     .56              .34
t - 8     .50              .42
t - 9     .46              .36
t - 10    .43              .39

Note: t = time.


(1.) Witness the U.S. presidential election of 2000 (see Wlezien 2001). There is some disagreement about the "predictability" of the outcome, however (see Bartels and Zaller 2001; Erikson, Bafumi, and Wilson 2001).

(2.) There is some research on congressional elections as well (Erikson and Sigelman 1995, 1996).

(3.) For surveys in the field for an even number of days, the fractional midpoint is rounded up to the following day. There is a good amount of variance in the number of days surveys are in the field: the mean number of days in the field is 3.57; the standard deviation is 2.39 days.

References

Alvarez, R. Michael. 1997. Information and elections. Ann Arbor: University of Michigan Press.

Bartels, Larry M., and John Zaller. 2001. Presidential vote models: A recount. PS: Political Science & Politics 34:9-20.

Box-Steffensmeier, Janet, and Renee Smith. 1998. Investigating political dynamics using fractional integration models. American Journal of Political Science 42:661-89.

Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. 1960. The American voter. New York: John Wiley.

Campbell, James E. 2000. The American campaign: U.S. presidential campaigns and the national vote. College Station: Texas A&M University Press.

Campbell, James E., and James C. Garand. 2000. Before the vote: Forecasting American national elections. Thousand Oaks, CA: Sage.

Converse, Philip E., and Michael W. Traugott. 1986. Assessing the accuracy of polls and surveys. Science 234:1094-98.

Crespi, Irving. 1988. Pre-election polling: Sources of accuracy and error. New York: Russell Sage.

DeBoef, Suzanna. 2000. Persistence and aggregations of survey data over time: From microfoundations to macropersistence. Electoral Studies 19:9-29.

Erikson, Robert S., Joseph Bafumi, and Bret Wilson. 2001. Was the 2000 presidential election predictable? PS: Political Science & Politics 34:815-19.

Erikson, Robert S., and Lee Sigelman. 1995. Poll-based forecasts of midterm congressional elections: Do the pollsters get it right? Public Opinion Quarterly 59:589-605.

--. 1996. Poll-based forecasts of the House vote in presidential election years. American Politics Quarterly 24:520-31.

Erikson, Robert S., and Christopher Wlezien. 1999. Presidential polls as a time series: The case of 1996. Public Opinion Quarterly 63:163-77.

Gelman, Andrew, and Gary King. 1993. Why are American presidential election polls so variable when votes are so predictable? British Journal of Political Science 23:409-51.

Granger, C. W. J. 1980. Long-memory relationships and the aggregation of dynamic models. Journal of Econometrics 14:227-38.

Groves, Robert M. 1989. Survey errors and survey costs. New York: John Wiley.

Heise, D. R. 1969. Separating reliability and stability in test-retest correlations. American Sociological Review 34:93-101.

Holbrook, Thomas. 1996. Do campaigns matter? Thousand Oaks, CA: Sage.

Johnston, Richard, Andre Blais, Henry E. Brady, and Jean Crete. 1992. Letting the people decide: Dynamics of a Canadian election. Kingston, Canada: McGill-Queen's University Press.

Lau, Richard. 1994. An analysis of the accuracy of "trial heat" polls during the 1992 presidential election. Public Opinion Quarterly 58:2-20.

Lazarsfeld, Paul F., Bernard R. Berelson, and Hazel Gaudet. 1944. The people's choice. New York: Columbia University Press.

Lebo, Matthew J., Robert W. Walker, and Harold D. Clarke. 2000. You must remember this: Dealing with long memory in political analyses. Electoral Studies 19:31-48.

Lewis-Beck, Michael S. 1988. Economics and elections. Ann Arbor: University of Michigan Press.

Lodge, Milton, Marco Steenbergen, and Sean Brau. 1995. The responsive voter: Campaign information and the dynamics of candidate evaluation. American Political Science Review 89:309-26.

MacKinnon, James G. 1991. Critical values for cointegration tests. In Long run economic relationships, edited by Robert F. Engle and Clive W. Granger, 267-76. New York: Oxford University Press.

Shaw, Daron R. 1999. A study of presidential campaign event effects from 1952 to 1992. Journal of Politics 61:387-422.

Traugott, Michael W. 2001. Assessing poll performance in the 2000 campaign. Public Opinion Quarterly 63:389-419.

Wlezien, Christopher. 2000. An essay on "combined" time series processes. Electoral Studies 19:77-93.

--. 2001. On forecasting the presidential vote. PS: Political Science & Politics 34:25-31.

Wlezien, Christopher, and Robert S. Erikson. 2001. Campaign effects in theory and practice. American Politics Research 29:419-37.

--. 2002. The timeline of presidential election campaigns. Journal of Politics 64:969-93.

Zaller, John. 2002. Assessing the statistical power of election studies to detect communication effects in political campaigns. Electoral Studies 21:297-329.

AUTHOR'S NOTE: An earlier version of this article was presented at the annual meeting of the American Political Science Association, San Francisco, September 2001. Portions were also presented at the University of Essex, Nuffield College, and Trinity College, Dublin, and in the seminar series of the Houston chapter of the American Statistical Association. The research forms part of a larger project with Robert Erikson that is supported by grants from the U.S. National Science Foundation (SBR-9731308 and SBR-0112856) and the Institute for Social and Economic Research at Columbia University. I thank Bruce Carroll, Joe Howard, Jeff May, and Amy Ware for assistance with data collection and management.

Christopher Wlezien is reader in comparative government and a fellow of Nuffield College at the University of Oxford. His research and teaching interests encompass a range of fields in American and comparative politics, and his articles have appeared in various journals and edited volumes. He recently coedited a book with Mark Franklin, The Future of Election Studies (Pergamon), and has begun research with Stuart Soroka on Degrees of Democracy.
