
Are surveys of experts unbiased? Evidence from college football rankings.


There is a voluminous literature dedicated to determining the aptness of expert performance (Shanteau 1992), as well as uncovering the premier methods for eliciting their collective opinions (Otway and von Winterfeldt 1992). Additional research is present on when betting markets might outperform experts or vice versa (Sunstein 2006). Of principal importance in all these literature streams is the concern over various forms and sources of bias that could undermine the validity of these judgments. College football polls, which are weekly rankings of the "best" college football teams produced by tallying votes from coaches and specialized journalists, have served as an experiment in much of this literature on bias (Avery and Chevalier 1999; Fair and Oster 2007; Paul, Weinbach, and Coate 2007). These rankings represent the aggregate subjective judgment of groups of "experts," and their judgments are tested on a weekly basis as these teams compete against one another. The purpose of this article is to examine those judgments for evidence of any systematic biases. Evaluating college football polls for bias, as well as their overall accuracy, might provide some insight into how similar polling could be implemented for policymakers interested in ranking alternative proposals.

There has also been much interest among academics in the possibility of using prediction markets to inform public policy decisions. For instance, if policymakers were interested in increasing economic growth or reducing income inequality, they could potentially establish a betting market where the outcome was determined by some observed change in measured results, conditional on alternative policy proposals. Theoretically, the betting market would generate a price or line that was reflective of the aggregate beliefs of the population regarding the ability of the proposed policy to actually accomplish the objective(s). For this reason, betting on sports has been studied extensively as an experiment to inform what might become of policy betting markets. For instance, there seems to be a "long-shot" bias in betting markets (Wolfers and Zitzewitz 2007). However, if the polling of experts is considered an alternative approach, then it is important that it is similarly studied for bias.

This article analyzes both the American Football Coaches Association (AFCA) and Associated Press (AP) college football polls for several possible forms of bias. The AP poll surveys college football journalists selected by a committee in each state, and the AFCA poll represents the survey of head football coaches representing each conference. (1) Although the focus of this article is merely on uncovering the existence of a bias rather than the underlying causal cognitive or survey process, the variables tested for bias are motivated by the existing literature on expert and cognitive bias. For example, a school with a highly regarded journalism program may receive some sort of favorable treatment from polled journalists via the affiliation bias. Similarly, schools from a particular conference may benefit or suffer as a result of pollsters relying on simplified informational heuristics.

To test for bias, regular season games from 2003 to 2008 in which at least one team had been ranked in the top 25 are compiled into a data set. With more than 1,300 observations in both polls, the higher ranked team won the game 79.2% of the time in the AP poll, as compared to 78.4% in the AFCA poll. When both teams were ranked, the higher ranked team still won the game 66.67% of the time in both polls.
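As a rough illustration, accuracy figures like those above can be tallied directly from game records. The sketch below is hypothetical: the field names and the treatment of unranked opponents are assumptions for illustration, not the article's actual data layout.

```python
# Sketch: share of games won by the higher ranked team.
# Field names (rank_a, rank_b, winner) are illustrative assumptions.

def higher_ranked_win_rate(games):
    """games: list of dicts with integer ranks (lower number = better;
    None = unranked) and 'winner' equal to 'a' or 'b'. Games between two
    unranked teams are excluded, as in the article's data set."""
    wins = total = 0
    for g in games:
        ra, rb = g["rank_a"], g["rank_b"]
        if ra is None and rb is None:
            continue  # both unranked: no ordinal comparison possible
        # Treat an unranked team as ranked below any top-25 team.
        ra_eff = ra if ra is not None else 26
        rb_eff = rb if rb is not None else 26
        favorite = "a" if ra_eff < rb_eff else "b"
        total += 1
        wins += (g["winner"] == favorite)
    return wins / total

games = [
    {"rank_a": 3, "rank_b": None, "winner": "a"},
    {"rank_a": 10, "rank_b": 7, "winner": "b"},
    {"rank_a": None, "rank_b": 25, "winner": "a"},
]
print(higher_ranked_win_rate(games))  # 2 of 3 favorites won
```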

A probit regression is employed to determine which characteristics and attributes determine the probability that the higher ranked team wins the game. Ideally, the aggregate results of the poll should reflect all relevant information regarding the quality of teams, while the error should be random and not correlated with any other attribute, such as conference affiliation or having a quality journalism school. If other attributes prove to be statistically significant determinants of a favored team being more or less likely to win on average, that is evidence of a systematic bias toward those attributes. In addition to the college team's conference affiliation and journalism school quality, this article examines the effect of a school being based in the Northeast Census Region, county population, and county real per capita personal income. Furthermore, this article tests for evidence of an excessive herding bias by controlling for a weekly trend and initial season effects.

The results suggest that systematic biases do exist among conferences in both polls. However, the evidence does not support systematic bias related to schools in the Northeast region, nor schools that possess a top-15 ranked journalism school. The overall stability in the level of accuracy of the polls throughout the season suggests a lack of excessive herding or path dependence bias. Also, among proxy measures for market power, only a population bias appears to be statistically significant, with a bias against those schools located in more highly populated areas. These biases carry relatively little influence on the overall predictive power of the polls, however, as the results suggest that correcting for them would only make the polls more predictive of the games' outcomes by about 1%.

Finally, the probit technique employed in this article provides an approach for overcoming some of the existing methodological problems in the sports economics literature on conference bias. Some of the existing literature on conference bias attempts to model the votes a team receives, but a rather extreme form of simultaneity bias emerges because the total number of votes available to all teams is fixed. As will be discussed, our technique reveals estimates which indicate the direction of the bias, with findings which are different from some of the existing literature.


The areas and extent to which expert judgment can be relied upon by policymakers is critical to formulating policy in areas that are often poorly understood by nonexperts (Otway and von Winterfeldt 1992). As information has become more abundant and accessible, there has been a growing literature on the best way to transform it into knowledge and access it for the purposes of making public policy. One could assign this purpose to several subjects of research that quickly produce a voluminous set of literature, but just a subset of the most relevant works is provided below.

Broad agreement appears to be present within the literature regarding the fact that experts differ from their novice peers along several different dimensions. A number of researchers in cognitive science have observed that experts approach and evaluate problems differently in their subject area than their novice counterparts (Chase and Simon 1973; Chi, Feltovitch, and Glaser 1981; Larkin et al. 1980). Much of this difference is related to their subject-specific knowledge, which allows experts to draw upon a larger domain of information. For instance, expert chess players are more aware of common tactical errors, and therefore how to avoid them (Chase and Simon 1973). In addition, experts exhibit a better ability to call upon a larger set, ideally all, of the relevant information in rendering judgments (Shanteau 1992). The knowledge of all relevant information should help them avoid systematic errors, or biases, that are commonly exhibited by laymen (Christensen-Szalanski and Willham 1991).

Although, in general, experts are considered more competent than laymen in their ability to render effective judgments in their subject area, researchers have devoted attention to determining on which subjects experts actually outperform novices in their assessments. Shanteau (1992) provides a review of the expert competence literature and reports that experts generally perform better on tasks with a specific set of characteristics: such tasks tend to involve relatively constant stimuli, repeated exposure to similar conditions, and more predictable behavioral domains (Shanteau 1992). For example, in Keren (1987), expert bridge players were found to be almost perfect in calibrating probabilities during tournament play, whereas amateur players experienced greater difficulty. Similarly, economists appear to do well in providing unbiased forecasts of key economic indicators (Brown and Mahal 1981).

However, expertise does not fully eliminate human bias. Medical doctors are susceptible to an availability heuristic bias, causing them to underestimate the likelihood of an epidemic disease for the most at-risk patients if they have recently experienced nonepidemic diseases (Poses and Anthony 1991). Nor are experts always immune to biases stemming from political ideology or employer preferences (Lynn 1986; Olsen 1997).

Another rapidly growing line of research is more popularly known as "crowd wisdom." This research is intended to describe scenarios in which judgments are elicited from a suitably large number of nondeliberating people, and is generally studied in the form of surveys/polling or betting/prediction markets. Both approaches possess many methodological variations and ultimately have the purpose of aggregating information in a way that provides an unbiased estimate of an event parameter. For a poll, this estimate may be taken from the sample mean or proportion, while, for a prediction market, this is often a market price.

Prediction markets allow traders to buy and sell contracts based on the likelihood of future events, employing their own knowledge and beliefs about future events. In equilibrium, the prices of the items being traded within this financial prediction market reflect the sum of all the traders' individual expectations regarding future events (Luckner 2008). Furthermore, monetary repercussions exist for an incorrect future prediction, providing an incentive for uninformed agents to abstain from voting, as well as to seek out and exploit bias. Nevertheless, a lack of accuracy within prediction markets can come from many sources, including: the number of traders, complexity of information, and the dividend structure of the market (Healy et al. 2010).

An alternative to prediction markets is polling from a random sample, and there are many examples where this approach appears to yield unbiased results (Lorge et al. 1958). For instance, by asking individuals to rank the size of ten slightly different sized piles of buckshot, Bruce (1936) found the group average guess was 94.5% accurate. Furthermore, polling experts seems to be even more advantageous, as concluded in a literature review comparing experts to laymen group forecasts by Armstrong (2001). The application of polls has also been a far less controversial route to informing public policymakers.

Considerable attention has been paid to sports, and college football in particular, as the subject of polls and betting markets. Avery and Chevalier (1999) compiled a table demonstrating how various experts performed in predicting winners in professional football matches when compared to the betting markets. None of the experts cited in Avery and Chevalier (1999) outperformed the betting markets by a statistically significant margin. Fair and Oster (2007) test the efficiency of various college football ranking systems by regressing their collective ranking change against the betting market spread. They found that there is no information in the system of polls that is not already incorporated in the point spread, but that the spread did contain information not employed in the compilation of polls. Similarly, Stone (2009) found that individual AP voters tended to ignore less salient signals of team quality throughout the season, suggesting that the polls were not importing all information while forming future expectations.

Paul, Weinbach, and Coate (2007) discovered that the information contained in the point spread drives the number of votes that a specific team receives week to week in the Coaches' Poll and the AP poll. Specifically, Paul, Weinbach, and Coate (2007) argue that the point spread acts as a proxy for the expectations of the voters with regard to the performance of the teams. If a team performs better than the point spread, the team will be rewarded with more votes, which can increase the ranking of the team. However, if the team fails to meet the spread, the team is penalized with fewer votes, and a possible decrease in ranking (Paul, Weinbach, and Coate 2007). Mirabile and Witte (2010) concluded that voters within the Coaches' Poll are often likely to rank recent opponents higher than other teams. Coleman et al. (2010) test the level of bias among individual voters within the AP poll. They found that individual voters possess bias favoring schools from the same state as the voter, conferences that are represented within the state in which the voter is located, as well as those schools that played games on certain television networks. The relevant implication of these findings is that the polls, just like the betting markets, implement existing information in the formation of expectations about the future, although their results also detect sources of bias within the poll as well. However, as will be discussed in the next section, results which try to predict changes in votes or rankings are straitjacketed into violating the independence of observations assumption required for interpreting traditional linear regression model coefficients. In other words, for one team to obtain more votes, there must be offsetting votes that are taken away from other teams, which biases regression coefficients.

In summary, previous research established that although expert polls are relatively accurate, in college football rankings, they do not include as much information as the betting markets. This article attempts to uncover the existence of aggregate bias in the college football polls. The difference between these errors is important: a poll can be unbiased while not using all information, or it could use all information and be biased. Furthermore, the individuals being polled may be biased just as the previous literature has found, but well-designed surveys might result in collectively unbiased polls.


The AP and AFCA polls are separately considered in this work, and Table 1 provides some basic information about their structure. Each poll employs the Borda ranking method, where those surveyed are asked to rank what they deem the top-25 NCAA Football Bowl Subdivision teams from best to worst. Teams receive 25 Borda points for every first-place vote they receive, 24 for every second-place vote, and so on. There were 120 eligible teams for these rankings in 2008. Each place in the top-25 ranking affords a team a set number of points, which are then aggregated across all voters to determine the final ranking. Both polls use voters that most individuals would likely consider professional experts on which qualities make one football team better than another.
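The Borda tallying both polls use can be sketched as follows. The ballots and team names are hypothetical, and real ballots list 25 teams rather than the 3 shown here.

```python
# Sketch of Borda aggregation: each voter submits an ordered top-N ballot;
# place k earns (N + 1 - k) points; points are summed across all voters.

def borda_rank(ballots, top_n=25):
    points = {}
    for ballot in ballots:
        for place, team in enumerate(ballot, start=1):
            points[team] = points.get(team, 0) + (top_n + 1 - place)
    # Final poll order: highest point total first.
    return sorted(points.items(), key=lambda kv: -kv[1])

# Hypothetical 3-team ballots for illustration only.
ballots = [
    ["Alabama", "Texas", "Florida"],
    ["Texas", "Alabama", "Florida"],
    ["Alabama", "Florida", "Texas"],
]
print(borda_rank(ballots))  # Alabama leads with 25 + 24 + 25 = 74 points
```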

TABLE 1
AP and AFCA Coaches' Poll Factsheet

Voters
  AP: Non-AP sports writers and broadcasters who cover college football regionally or nationally.
  AFCA: Coaches who are members of the American Football Coaches Association.

Voter selection
  AP: Voters are chosen on the basis of the state of their employing media outlet. The number of voters from each state is determined according to a formula that is proportional to the number of teams that are members of the NCAA Football Bowl Subdivision (FBS). The formula is occasionally revised as teams are added to the FBS, and the total number of voters is usually in the mid to low 60s. For the years of this study, this formula was "2 to 1," so that a state with one or two teams gets one voter, and three or four teams gets two voters. Starting in 2009, this formula will be increased to "3 to 1" to accommodate the increase in the number of teams. After the number of voters a state receives is determined, the Bureau Chief for that state forms a panel that determines which media representatives should be invited to participate in the poll. These representatives must cover college football nationally or regionally, be "reliable and knowledgeable," and be willing to commit to the poll for 1 year.
  AFCA: Voters are chosen on the basis of the conference of which their team is a member. The number of representatives per conference is rounded from the number of teams in the conference divided by two; each conference receives half as many voters as it has institutions. There were 61 total voters in 2008, but the number is determined by conference membership.

Frequency of rankings
  AP: Preseason, weekly, postseason.
  AFCA: Preseason, weekly, postseason.

Individual ballots public?
  AP: Yes.
  AFCA: As of the 2005 season, coaches are permitted to publicly release their ballots.

Miscellaneous
  AP: A voter was removed from the poll board in mid-season 2006 after admittedly casting his vote under the mistaken belief that a team in the rankings had lost (see The Associated Press 2006). Voters are permitted to vote for teams on probation from the NCAA.
  AFCA: Coaches are allowed to vote for their own teams. The AFCA prohibits coaches from voting for schools on major NCAA probation.

The AP poll surveys writers and broadcasters who specialize in covering NCAA college football at the national or regional level. (2) Each state is allocated a number of voters that is proportional to the number of residing teams within that state. The AP has bureaus in every state, and these state bureaus are delegated the responsibility of forming a panel that selects the voters from their respective state. They are instructed to pick "reliable and knowledgeable" voters willing to commit to the poll for 1 year. The first survey is conducted prior to the beginning of the season, then weekly throughout the season, and a final poll after the season has ended. During the season, most games are played on Saturday, although a limited number of games are played earlier in the week and on Sunday. The rankings submitted by each voter are published online at the AP's Web site.

The AP as an organization does not evaluate the performance of the voters in any formal manner. If a ballot is received that seems "out of place," as Taylor (2009) explains it, a representative from the AP will contact the voter only to verify that the AP received the ranking as the voter intended it to be. In one case, a voter did come forward with the acknowledgment that he based his rankings on an incorrect assumption about a particular game's outcome, and as a result was dismissed from the poll's board (The Associated Press 2006). Although the voters are determined on an annual basis, many do repeat from year to year.

The AFCA poll, also known as the USA Today poll after its current sponsor, surveys head coaches who are members of the AFCA. (3) The coaches who vote in this poll are collectively known as the "Board of Coaches." Although many individuals remain on the Board of Coaches from year to year, there is some replacement on an annual basis. Like the AP poll, voters representing the Board of Coaches are not randomly selected. The AFCA requires each conference to select its voters, and the number of voters a conference obtains is a function of the number of affiliated schools within that conference. The regular season final rankings submitted by coaches during the season were confidential until 2005. Since then, coaches are permitted to publicly reveal their rankings if they so choose.


Although there were 120 teams in Division I-A NCAA Football in 2008, only 25 schools in both the AP and AFCA poll are assigned an ordinal ranking based on the number of votes they receive. Two data sets, one for each poll, are created that include only the regular season games from 2003 to 2008 in which at least one team was ranked in the top 25 of the poll. (4) As unranked teams cannot be assigned an ordinal rank, games featuring two unranked teams are excluded from the analysis. This exclusion leaves us with more than 1,300 observations in both the AP and AFCA polls from that time period.

The intention of this research is to uncover potential sources of bias using ex-post analysis. Most of the existing literature attempts to use what we would characterize as an ex-ante approach, which attempts to estimate the expected ranking or votes based on teams' recent performances (e.g., Coleman et al. 2010; Mirabile and Witte 2010). The ex-ante approach is characterized by the researcher trying to control for the quality of recent performance, and then detecting bias by seeing if quality-irrelevant variables carry significant explanatory power. However, as the ranking system is a zero-sum game, that is, one team cannot obtain more votes without other teams receiving fewer, the observations are not independent and thus violate the Gaussian assumptions required for ordinary least squares (OLS). (5) Votes and rankings are also not real numbers in the sense that receiving twice as many votes or having a ranking twice as high as another team does not necessarily imply that the team is twice the quality of another. However, the large number of possible teams that can receive votes makes the implementation of an ordered probit or logit regression to accommodate ordinal rankings unlikely to succeed. (6)

The ex-post methodology employed in this article is motivated by similar ex-post empirical tests of market efficiency in sports betting markets, in which the actual point spread is regressed on a constant and the betting spread (Sauer 1998). An unbiased market, in such a regression, uses all relevant information and will yield a constant of 0 and a coefficient of 1 on the betting spread. If there is a bias, then other variables capturing this bias will be significant when included in the specification. One of the advantages of this approach is that it does not require much information about the sport or the event itself, but is rather judging the ability of those betting on the game itself to use all relevant information. (7)
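For readers unfamiliar with this test, a minimal sketch of the efficiency regression follows, using fabricated spreads and a hand-rolled OLS fit. The article itself works with poll rankings rather than betting data; this is purely an illustration of the intercept-0, slope-1 benchmark.

```python
# Sketch of the ex-post market efficiency test (Sauer 1998): regress the
# actual point spread on a constant and the betting spread. An unbiased
# market implies intercept ≈ 0 and slope ≈ 1. The data below are fabricated.

def ols_simple(x, y):
    """Simple OLS of y on a constant and x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

betting_spread = [3.0, -7.0, 10.5, 1.5, -3.5]
actual_spread = [3.0, -7.0, 10.5, 1.5, -3.5]  # perfectly efficient toy data
b0, b1 = ols_simple(betting_spread, actual_spread)
print(b0, b1)  # intercept 0, slope 1 for this idealized case
```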

Similarly, this article uses the ordinal rankings of the polls to create a dependent variable which indicates whether or not the more highly ranked team wins (HRW) in game i for each poll:
$HRW_i = 1$ if the higher ranked team wins game $i$, and $HRW_i = 0$ otherwise.


Unlike votes or rankings, the outcomes of games are independent observations and therefore do not violate classical regression assumptions. The forthcoming model will be estimated using probit regression, where $HRW_i^*$ is the unobserved latent variable: (8)

(1) $HRW_i^* = \beta_0 + \beta_1 Z_i + \epsilon_i$, $\epsilon_i \sim N(0, \sigma^2)$; $HRW_i = 1$ if $HRW_i^* > 0$ and 0 otherwise
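As a numerical illustration of Equation (1): with no covariates, the probit MLE for the intercept is simply $\beta_0 = \Phi^{-1}(p)$, where $p$ is the favorite's win rate. The sketch below recovers the intercept implied by the AP poll's 79.2% hit rate; the bisection-based inverse normal CDF is an illustrative assumption, not the article's estimation code.

```python
import math

# With Z empty, the probit intercept satisfies Phi(beta0) = p, so
# beta0 = Phi^{-1}(p). Here p = 0.792 is the AP favorite win rate.

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection; adequate for illustration."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

p_ap = 0.792
beta0 = norm_ppf(p_ap)
print(round(beta0, 3))            # intercept implied by the AP hit rate
print(round(norm_cdf(beta0), 3))  # recovers 0.792
```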

By definition of the dependent variable, the intercept parameter $\beta_0$ will represent the ability of the poll to predict the winner by assigning the eventual winner a higher ranking. If having a higher ranking than an opponent prior to the game is directly correlated with actually winning it, then $\beta_0$ will be positive. This would be most analogous to the coefficient on the spread variable in betting market regressions, where all relevant information about these teams should be contained within the equation. Although much of the previous literature has attempted to estimate how polls react to events, this article assesses how well the polls used existing information to determine rankings. Now, while this approach does not mistakenly treat votes or rankings as if they were real numbers, it adopts the opposite extreme by treating the vote distribution as non-informative beyond determining which team is ranked higher. This will be discussed momentarily within this section, and serves as the basis for robustness checks in Section V.B.

In Equation (1), $Z_i$ represents a k x 1 vector of indicators that would not indicate pollster bias, but would make the ranking less predictive of the outcome through the 1 x k vector of parameters $\beta_1$. Ignoring these factors would bias the intercept coefficient $\beta_0$ because it captures the ability of the poll to correctly identify the winner. For instance, experts often claim to rank teams on "neutral territory." (9) This is often described as a mental exercise in which they form an evaluation of the team without consideration for its ability to win in its home stadium. As very few regular season games are played in this manner, a dummy variable (Home) that takes a value of 1 when the game is played at the favorite team's home field, and 0 otherwise, is included in the regression. The coefficient on the Home indicator would then reflect the value of playing at home for the higher ranked team's chances of winning the game, and excluding it would bias the intercept toward 0. Similarly, many schools develop rivalries within their conferences that are believed to make their outcomes less predictable. If this is true, excluding a control for intraconference games would similarly bias the intercept. Finally, an indicator variable for games in which both teams are ranked (Both Rank) adjusts the intercept to accommodate the fact that random error plays a larger role in games where teams are very close in subjective quality, and hence the polls will likely be less accurate for reasons unrelated to pollster bias.
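A hypothetical sketch of building the $Z_i$ controls for one game follows; all field names are assumptions for illustration, since the article does not publish its data layout.

```python
# Sketch: the three Z_i controls described above (Home, intraconference,
# Both Rank) for a single game. Schools, conferences, and ranks are made up.

def controls(game):
    higher, lower = game["higher"], game["lower"]
    return {
        # 1 if the game is at the higher ranked team's home field.
        "Home": 1 if game["site"] == higher["school"] else 0,
        # 1 if both teams belong to the same conference (rivalry proxy).
        "IntraConf": 1 if higher["conference"] == lower["conference"] else 0,
        # 1 if both teams hold a top-25 rank (closer subjective quality).
        "BothRank": 1 if (higher["rank"] is not None
                          and lower["rank"] is not None) else 0,
    }

game = {
    "site": "Ohio State",
    "higher": {"school": "Ohio State", "conference": "Big Ten", "rank": 4},
    "lower": {"school": "Michigan", "conference": "Big Ten", "rank": None},
}
print(controls(game))  # {'Home': 1, 'IntraConf': 1, 'BothRank': 0}
```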

The conjecture at this point is that if there is no pollster bias, the model in Equation (1) would be sufficient and the inclusion of any additional variable would be irrelevant. Ideally, experts use all relevant information about the quality of the team in forming their rankings, for example, the skills of its players and coaches. The error arises from unobserved heterogeneity in the games, for example, random events in competition, as well as from incomplete or misapplied information. Also, error may arise because of "match-up" problems in sports, where transitivity might not hold between how teams rank overall and how they compare in head-to-head competition. Absent bias, this unobserved error should not be systematically correlated with factors such as the schools' conference affiliation, journalism school quality, etc.

The rankings generated by the polls are a constant subject of debate among fans and media, and these polls are often criticized with accusations of bias. Although fans and football pundits seem to identify unlimited sources of bias, we identify and test for what we perceive to be the most common accusations. Although only the existence of bias per se is the relevant concern, motivations from the existing literature on bias studied in cognitive science and social psychology are also provided.

Two potential sources of bias that have already been mentioned are biases toward schools that are members of particular conferences and schools with prominent journalism programs. These can both be thought of as sources of in-group bias. If those surveyed in the polls, journalists and coaches, self-identify these institutions to be members of their social groups, then social identity theory posits that they will derive part of their self-esteem from the achievements of that group (Tajfel 1982). Ultimately, social identity theory proposes that individuals are biased toward more favorable comparisons of their in-group over their rival out-groups, allowing them to manifest or maintain a greater social self-esteem. Empirical work is supportive of the existence of in-group bias (Aberson, Healy, and Romero 2000); for instance, Crocker and Luhtanen (1990) conducted an experiment in which subjects with high and low levels of social self-esteem were asked to rate their group after receiving group performance feedback on a test. They found that those individuals with high levels of social self-esteem reported their own evaluations of the group's performance in an in-group-enhancing manner, whereas those with low levels of social self-esteem did not. For the most part, both the AP and AFCA polls structure their survey in a manner that would mitigate in-group bias. As discussed in Section III, the AP poll draws from the media by state, with representation being a positive function of the number of teams in the state. The AFCA poll similarly draws coaches from their respective conferences. Nevertheless, conference biases might not be fully offset, or some other factor could be present that would induce a bias. The conventional wisdom of the sports news media, and indeed of the sports economics literature, is that the more prestigious conferences are likely to receive some preferential treatment, perhaps due to voter identity bias (Coleman et al. 2010; Mirabile and Witte 2010).

One of the more popular accusations of bias is known in the popular media as the "East Coast Bias." (10) Loosely speaking, this appears to be a label for perceived favorable rankings to teams from the Northeast region of the country. (11) Some have suggested that the sport's history and industrialization in the region, as well as a more favorable time zone for media coverage, are responsible for this bias (Schreiber 2008). (12) This is tested by including a dummy variable that identifies whether or not the team is from a state in the Northeast Census Region.

A related popular concern regarding media coverage is that media outlets provide additional attention, and perhaps even favorably skew their poll rankings, toward big-market schools that would have a larger level of representation among their consumers. Murphy (2001) found a similar affiliation bias in expert congressional testimony on nicotine, where technical lab procedures and the framing of questions were influenced by the sponsor's strategy. This kind of bias could also be a result of the competitive process, where the organizations most successful in receiving attention for their polls are those that appeal to their consumers, who are disproportionately represented in select markets. To test for affiliation bias, a control is added for both population and real per capita personal income of the county in which the university bases its main campus. Another proxy variable for market size is endowment size, which is averaged over the sample period to make it time invariant for each school. (13)

Another possible source of bias in the rankings is the "underdog effect," which is the observation that entities which are expected to lose a competitive event tend to be viewed more favorably by onlookers. This is one of the most well-documented phenomena in social psychology (Frazier and Snyder 1991; Goldschmied and Vandello 2009; Kim et al. 2008; Vandello, Goldschmied, and Richards 2007), and there is some evidence of it introducing market inefficiency into sports betting as well (Levitt 2004; Weyer and Aadland 2010). This might result in teams from conferences with traditionally lower quality teams getting some favorable treatment from the voters when teams from those conferences have a good year.

For the first week of the regular season, the rankings are based on the preseason polls. These preseason polls supposedly are based heavily on the end-of-previous-season performance, expectations of returning players, and new recruits. In addition, teams sometimes schedule "tuneup" games in the first few weeks of the season that might cause the polls to have a different degree of accuracy than in the rest of the season. Although there is not an a priori reason to expect these decisions to favor any individual team or conference in any way different from the rest of the season, a common concern is that there are "bandwagon" or "excessive herding" biases that favor teams who were fortunate enough to get a good early season ranking. This bias would result in correlated errors as those being polled are influenced by the opinions of others in the survey. A good deal of evidence of this bias is present in political polls (see, e.g., Henshel and Johnston 1987; Mehrabian 1998; Morwitz and Plunzinski 1996). There is also supporting economic theory on information cascades that sometimes makes it privately rational for individuals to imitate their peers in sequential decisions (Bikhchandani, Hirshleifer, and Welch 1998). Furthermore, as time progresses throughout the season, the improved information available to voters should improve the accuracy of the polls. However, if the excessive herding bias effect exists and dominates, the testable implication of this bias is that the accuracy of polls will decline as the season progresses.

With these biases in mind, the general form of equation in (1) is expanded to include controls for potential sources of bias, written in vector form as follows:

(2) HRW* = [[beta].sub.0] + Z[[beta].sub.1] + [[delta].sub.1]SeasonLate + [[delta].sub.2]Week + XDiff[alpha] + HRD[[gamma].sub.1] + LRD[[gamma].sub.2] + [epsilon]

The probit regression of Equation (2) generates the necessary parameter estimates. To draw inferences about possible bias in the polling process, one must carefully consider the interpretation of the coefficients. Again, the dependent variable indicates whether or not the poll favorite won the game, so the intercept, [[beta].sub.0], reflects the poll's average ability to predict the winner of the game. The remaining coefficients reflect how their associated traits affect the probability that the higher ranked team wins.

Both SeasonLate and Week are n x 1 vectors, whereas HRD and LRD are n x k matrices of dummy variables for specific attributes of the higher and lower ranked teams playing in the game, respectively. Week represents the week of the season in which each game was played, and SeasonLate is an indicator that takes a value of 1 if the game was played after the third week. Both of these variables are included to test for the previously described excessive herding bias. If there is no systematic bias of this sort, polls will display similar levels of accuracy at every point in the season. If a bias in the polls unduly favors ranked teams from the beginning of the season, poll accuracy will decline as these teams are defeated by their underdog counterparts, causing [[delta].sub.1] and/or [[delta].sub.2] to be negative.

Potential sources of bias measured on a continuous scale are included as differences between the favorite and the underdog in the n x m matrix XDiff. Per capita personal income, for example, is included in XDiff of Equation (2) so that when income levels are higher in the favorite team's county, the variable Income Diff becomes positive. If a bias exists favoring teams from higher income counties, then the corresponding element of the m x 1 vector [alpha] will be negative, as the poll will be less likely to predict victory for the favored team. If there is a bias against this attribute, then [alpha] will be positive. Three continuous variables fall under this discussion: the aforementioned income variable, population, and university endowment.

These characteristics in HRD and LRD will be useful in identifying the other discussed possible sources of bias, like the teams' respective conference, location in the Northeast region, and whether or not the university houses a top-15 journalism program. The interpretation of the variables for bias in Equation (2) is as follows:

1. [[gamma].sub.1]: If positive (negative), then when the higher ranked team has the associated attribute, they are more (less) likely to win than the poll alone indicates, consistent with the favorite experiencing a bias that is unfavorable (favorable) to them.

2. [[gamma].sub.2]: If positive (negative), then when the lower ranked team has the associated attribute, the more highly ranked team is more (less) likely to win, consistent with the lower ranked team experiencing a bias that is unfavorable (favorable) to them.

For convenience, this intuition is summarized again in Table 2. One must examine both [[gamma].sub.1] and [[gamma].sub.2] to understand if an attribute, like conference membership, carries a systematic bias in the polls. As having a particular attribute enters into the model twice, once for the higher and once for the lower ranked team, a composite measure of pollster bias will be calculated as [[theta].sup.j] = [[gamma].sub.1.sup.j] - [[gamma].sub.2.sup.j]. If, for instance, a particular attribute has signs indicative of a pessimism bias for both favorites and underdogs ([[gamma].sub.1.sup.j] > 0, [[gamma].sub.2.sup.j] < 0), then [[theta].sup.j] > 0 and this exercise will seem a bit redundant outside of an interest in statistical significance. However, if the exhibited pollster bias is asymmetric, as indicated by [[gamma].sub.1.sup.j] and [[gamma].sub.2.sup.j] carrying the same sign, then the test on [[theta].sup.j] indicates which effect dominates, and the interpretation follows in the same manner as [alpha]. This could be the case for conference indicators if there are different biases directed at particular teams within the conference during the sample period. Table 3 describes the variables used in the regressions, whereas Table 4 provides the means and standard deviations by poll.

Bias Intuition of Signs on Attribute Coefficients

[[gamma].sub.1.sup.j]   [[gamma].sub.2.sup.j]   Bias Toward Attribute j

+                       -                       Against

-                       +                       In favor


Variable Descriptions and Sources

Variable (a)        Description

Favorite Wins       1 if the higher ranked team won the game, else 0.

Home                The game was played on the home field of the
                    favored team.

Week                The week of the regular season when the game was
                    played.

Season Late         If Week > 3, then Season Late = 1, else 0.

Higher Conf (b)     The higher ranked team is a member of the Conf.

Lower Conf (b)      The lower ranked team is a member of the Conf.

Higher J-School (c) The higher ranked team is from a university with a
                    top-15 journalism graduate school.

Lower J-School (c)  The lower ranked team is from a university with a
                    top-15 journalism graduate school.

Higher NE Region    The higher ranked team is from a state in the
                    Northeast Census Region (Maine, New Hampshire,
                    Vermont, Massachusetts, Rhode Island, Connecticut,
                    New York, Pennsylvania, and New Jersey).

Lower NE Region     The lower ranked team is from a state in the
                    Northeast Census Region (Maine, New Hampshire,
                    Vermont, Massachusetts, Rhode Island, Connecticut,
                    New York, Pennsylvania, and New Jersey).

Both Ranked         Both teams playing in the game are ranked by the
                    polls.

Win Diff            The winning percentage of the favorite minus the
                    winning percentage of the lower ranked team prior
                    to playing the game. For the first game of the
                    season, the end of the previous season record is
                    used.

Rank Diff           The rank of the lower ranked team minus the rank
                    of the higher ranked team, conditional on both
                    teams being ranked.

Population Diff (d) Population of the county where the higher ranked
                    team's main campus is located, less the
                    corresponding value for the underdog (in
                    millions).

Income Diff (d)     Real per capita personal income in the county
                    where the higher ranked team's main campus is
                    located, less the corresponding value for the
                    underdog (in tens of thousands of 2000 dollars).

Endow Diff (e)      Natural log of the average annual market value of
                    the university endowment of the higher ranked
                    team, less the corresponding value for the lower
                    ranked team (in millions of 2000 dollars).

Vote Diff           Number of Borda points of the team with the higher
                    ranking, less the corresponding value for the
                    lower ranked team, in thousands. When squared,
                    this is in hundreds of thousands.

Under No Votes      Dummy variable indicating that the lower ranked
                    team received 0 Borda points.

(a.) All data compiled from unless otherwise noted.
(b.) Conference abbreviations: Atlantic Coast Conference (ACC);
Conference USA (CUSA); Mid-American Conference (MAC); Mountain West
Conference (MTW); Pacific-10 Conference (PAC-10); Southeastern
Conference (SEC); Western Athletic Conference (WAC); No Conf:
independent teams that do not affiliate with a conference.
(c.) U.S. News and World Report, "America's Best Graduate
Schools," 1996 Edition.
(d.) Bureau of Economic Analysis.
(e.) National Center for Higher Education Management Systems.


Summary Statistics for the AP and AFCA Polls

 AP Poll USA Today Poll
Variable M SD M SD

Higher Wins 0.79 0.41 0.78 0.41
Home 0.58 0.49 0.58 0.49
Both Ranked 0.15 0.36 0.16 0.37
Win Diff 0.23 0.37 0.23 0.37
Rank Diff 0.13 0.38 0.21 0.96
Same Conference 0.60 0.48 0.66 0.47
Season Late 0.78 0.41 0.78 0.42
Week 7.45 3.96 7.37 3.96
Higher J-School 0.17 0.38 0.17 0.38
Lower J-School 0.11 0.31 0.10 0.31
Higher NE Region 2.89 1.25 2.90 1.25
Lower NE Region 2.76 1.26 2.76 1.26
Population Diff 0.26 2.44 0.27 2.46
Income Diff -0.02 0.91 -0.02 0.92
Endow Diff 0.51 3.36 0.54 3.39
Higher ACC 0.12 0.33 0.12 0.33
Lower ACC 0.11 0.31 0.11 0.31
Higher Big 10 0.16 0.37 0.17 0.37
Lower Big 10 0.13 0.34 0.14 0.34
Higher Big 12 0.16 0.37 0.16 0.37
Lower Big 12 0.13 0.34 0.13 0.34
Higher Big East 0.07 0.26 0.07 0.25
Lower Big East 0.07 0.26 0.07 0.25
Higher CUSA 0.02 0.12 0.01 0.11
Lower CUSA 0.05 0.21 0.05 0.21
Higher MAC 0.04 0.20 0.04 0.20
Lower MAC 0.04 0.20 0.04 0.20
Higher MTW 0.13 0.34 0.13 0.34
Lower MTW 0.12 0.33 0.12 0.33
Higher PAC-10 0.19 0.39 0.20 0.40
Lower PAC-10 0.15 0.36 0.15 0.36
Higher SEC 0.05 0.22 0.05 0.22
Lower SEC 0.06 0.23 0.06 0.23
Higher WAC 0.02 0.15 0.02 0.14
Lower WAC 0.02 0.14 0.02 0.13
Vote Diff 0.08 0.05 0.07 0.04
Under No Votes 0.85 0.36 0.84 0.37


A. The Main Results

Probit regression estimates with robust standard errors are presented for both polls in Tables 5 and 6. Equation (1) estimates are in the first column for each set of results, and include only controls for variables that are not expected to capture pollster bias. The next column, specification (2), adds the variables discussed in the previous section that would be indicative of bias. Specification (3) helps to check the robustness of the model's intuition by adding additional control variables for team quality that should already primarily be reflected in the intercept. These team quality variables added to specification (3) include the difference in win percentage between the two teams, the difference in rank, and the rank difference squared. (14) Finally, specification (4) includes all the quality control variables of specification (3) and the bias variables of specification (2). Table 6 provides the full list of the conference control variables due to the limited space in Table 5.

Probit Estimates for the Higher Ranked Team Winning the Game

 AP Poll

 (1) (2) (3) (4) (1)

Home 0.471*** 0.372*** 0.575*** 0.468*** 0.501***
 (0.080) (0.086) (0.106) (0.118) (0.081)

Both Ranked -0.439*** -0.396*** -0.190 -0.139 -0.515***
 (0.103) (0.107) (0.264) (0.289) (0.101)

Pct Win Diff 4.372*** 5.152***
 (0.398) (0.421)

Rank Diff -0.112 -0.254
 (0.595) (0.661)

Rank [Diff.sup.2] 0.197 0.262
 (0.291) (0.323)

Same Conference -0.285*** -0.113 -0.429*** -0.254 -0.256***
 (0.089) (0.138) (0.139) (0.210) (0.090)

Season Late -0.118 -0.009
 (0.164) (0.261)

Week 0.007 -0.015
 (0.015) (0.022)

Higher Top -0.037 0.167
J-School (0.128) (0.168)

Lower Top -0.016 0.092
J-School (0.131) (0.180)

Higher NE Region 0.079 0.066
 (0.073) (0.105)

Lower NE Region -0.013 0.020
 (0.062) (0.087)

Population Diff 0.062*** 0.076**
 (0.019) (0.028)

Income Diff -0.081 -0.074
 (0.053) (0.076)

Endow Diff 0.056 0.062
 (0.040) (0.062)

Conference No Yes No Yes No

Intercept 0.833*** 0.773* 0.544*** 0.939 0.798***
 (0.090) (0.463) (0.139) (0.648) (0.091)

Pseudo-[R.sup.2] 0.054 0.122 0.510 0.582 0.060

Adj-Count 0.000 0.001 0.111 0.121 0.002

 AFCA Poll

 (2) (3) (4)

Home 0.400*** 0.596*** 0.469***
 (0.086) (0.104) (0.114)

Both Ranked -0.462*** -0.412** -0.349*
 (0.105) (0.186) (0.198)

Pct Win Diff 4.022*** 4.561***
 (0.377) (0.403)

Rank Diff 0.251 0.201
 (0.224) (0.238)

Rank [Diff.sup.2] -0.029 -0.024
 (0.024) (0.025)

Same Conference -0.092 -0.439*** -0.239
 (0.137) (0.137) (0.209)

Season Late -0.151 -0.177
 (0.161) (0.257)

Week 0.012 -0.002
 (0.015) (0.019)

Higher Top -0.069 0.063
J-School (0.127) (0.165)

Lower Top -0.056 0.040
J-School (0.131) (0.165)

Higher NE Region 0.070 0.066
 (0.073) (0.093)

Lower NE Region -0.005 0.075
 (0.061) (0.080)

Population Diff 0.051*** 0.073***
 (0.018) (0.025)

Income Diff -0.084 -0.095
 (0.053) (0.069)

Endow Diff 0.071* 0.063
 (0.040) (0.057)

Conference Yes No Yes

Intercept 0.809* 0.588*** 0.685
 (0.474) (0.136) (0.581)

Pseudo-[R.sup.2] 0.123 0.491 0.552

Adj-Count 0.005 0.117 0.126

Notes: Sample size is 1,347 in AP poll, 1,325 in AFCA. Robust
standard errors are reported in parentheses. Conference
coefficients are reported in Table 6.
* Significant at 10% level; ** significant at 5% level; ***
significant at 1% level.


Conference Coefficients from Regressions in Table 5

                 AP Poll              AFCA Poll

                 (2)        (4)       (2)        (4)

Higher ACC      -0.021      0.329    -0.075      0.603
                (0.291)    (0.477)   (0.306)    (0.494)

Lower ACC       -0.734***  -1.359*** -0.637**   -1.446**
                (0.260)    (0.602)   (0.269)    (0.631)

Higher Big 10    0.211      0.522     0.160      0.546
                (0.331)    (0.488)   (0.347)    (0.493)

Lower Big 10    -0.366     -1.120**  -0.437     -0.953*
                (0.275)    (0.569)   (0.286)    (0.571)

Higher Big 12    0.598**    0.593     0.611*     0.811
                (0.304)    (0.500)   (0.321)    (0.498)

Lower Big 12    -0.617**   -1.352**  -0.654**   -1.290**
                (0.268)    (0.591)   (0.276)    (0.609)

Higher Big East -0.044     -0.007    -0.124      0.150
                (0.302)    (0.517)   (0.311)    (0.507)

Lower Big East  -0.481*    -0.982    -0.577**   -1.146*
                (0.261)    (0.605)   (0.267)    (0.615)

Higher MAC      -1.026**   -2.155**  -0.865*    -1.825*
                (0.479)    (0.884)   (0.515)    (0.954)

Lower MAC        0.763*     1.068     0.485      0.892
                (0.456)    (0.882)   (0.390)    (0.905)

Higher MTW       0.481      1.193     0.291      0.871
                (0.408)    (0.628)   (0.418)    (0.633)

Lower MTW       -0.114     -0.946    -0.042     -0.590
                (0.368)    (0.783)   (0.386)    (0.749)

Higher PAC-10    0.429      1.544***  0.443      1.622***
                (0.356)    (0.581)   (0.369)    (0.557)

Lower PAC-10    -0.565**   -1.797*** -0.638**   -1.666***
                (0.271)    (0.562)   (0.280)    (0.577)

Higher SEC       0.606**    1.122**   0.519*     1.231**
                (0.286)    (0.545)   (0.299)    (0.515)

Lower SEC       -0.885***  -1.883*** -0.842***  -1.831***
                (0.231)    (0.634)   (0.238)    (0.638)

Higher WAC       0.266      0.242     0.271      0.481
                (0.369)    (0.578)   (0.377)    (0.529)

Lower WAC        0.136     -0.574     0.030     -0.813
                (0.382)    (0.738)   (0.372)    (0.620)

Higher No Conf   0.420      0.498     0.213      0.619
                (0.391)    (0.595)   (0.396)    (0.586)

Lower No Conf   -0.244     -0.982    -0.368     -1.273
                (0.352)    (0.709)   (0.376)    (0.787)

Notes: Sample size is 1,347 in AP poll, 1,325 in AFCA. Robust
standard errors are reported in parentheses. Conference USA and
the Sun Belt are the reference group. Based on estimates from
specifications (2) and (4) in Table 5.
* Significant at 10% level; ** significant at 5% level; ***
significant at 1% level.

All results are accompanied by a McFadden (1974) pseudo-[R.sup.2], which is less than 0.13 for both polls in specifications (1) and (2), but near 0.5 when the quality control variables are added in specifications (3) and (4). (15) In addition, an adjusted-count [R.sup.2] ([R.sub.AdjC.sup.2]) is reported, which indicates the percentage of outcomes correctly predicted by the model beyond just guessing the average. In the context of this article, an [R.sub.AdjC.sup.2] of .01 would imply that the model generates the correct prediction of the games' outcomes 1% more often than simply always guessing that the higher ranked team wins.
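As a concrete illustration, the adjusted-count statistic can be computed directly from a model's predicted probabilities. The sketch below (Python, with invented toy data) follows the article's stated interpretation, counting correct predictions beyond always picking the higher ranked team and dividing by the total number of games; note that some textbook definitions instead divide by the number of non-modal outcomes.

```python
import numpy as np

def adjusted_count_r2(y_true, y_prob):
    """Share of games called correctly beyond always guessing the
    modal outcome (here, always picking the higher ranked team)."""
    y_pred = (y_prob >= 0.5).astype(int)
    correct = np.sum(y_pred == y_true)
    modal = max(np.sum(y_true == 1), np.sum(y_true == 0))
    return (correct - modal) / y_true.size

# Toy sample: 10 games, the favorite wins 8 of them; the model calls
# 9 games correctly, one more than always picking the favorite.
y = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
p = np.array([.9, .8, .7, .9, .6, .8, .7, .9, .3, .6])
print(adjusted_count_r2(y, p))  # 0.1: one extra correct call in 10 games

# Scaling a gain in the statistic by the number of games recovers the
# article's arithmetic: a 0.010 gain over the 1,347 AP-poll games
# implies roughly 0.010 * 1347 = 13.47 more games called correctly.
```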

If there were no systematic bias in the polls, the only significant explanatory variables would be those included in columns (1) and (3) following the specification of Equation (1). (16) In all specifications for both polls, the higher ranked team is more likely to win the game when they have home field advantage, but less likely when their opponent is from the same conference. The [R.sub.AdjC.sup.2] is below 0.003 in specification (1) for both polls, likely because it controls for factors that are randomly distributed between the higher and lower ranked teams, so this information ends up not being a significant advantage over just blindly guessing the better ranked team to win every time. Including other variables that indicate the quality of the teams playing in the game, specification (3) increases the [R.sub.AdjC.sup.2] to 0.111 and 0.117 for the AP and AFCA poll, respectively. (17)

Comparing the [R.sub.AdjC.sup.2] for models with the bias control variables, specifications (2) and (4), to their counterpart specifications without them, specifications (1) and (3), reveals the additional explanatory power these biases have on the outcomes. (18) For instance, specification (2) increases the [R.sub.AdjC.sup.2] from specification (1) by 0.001 in the AP poll and 0.003 in the AFCA poll. (19) This means correcting for bias would have correctly predicted the outcome of an additional 1.3 games in the AP poll and 3 games in the AFCA poll for the sample data. Comparing specification (4) to specification (3) indicates that controlling for bias would increase correct predictions over the sample data by 13.47 and 11.93 games in the AP and AFCA polls, respectively. (20) As will be discussed next, there are statistically significant sources of bias in the polls, but overall they are not particularly influential on the rankings. This is somewhat analogous to the findings on betting markets, where the academic literature does find some sources of bias, but usually the bias is small enough that one could not make an economic profit by exploiting it (Sobel and Ryan 2008).

Examining now the coefficients indicative of bias in specifications (2) and (4) in Tables 5 and 6, it is notable first that the results are substantively very similar. Although statistical significance differs slightly between the two sets of estimates, there are no cases where a statistically significant result changes its sign. The remainder of this section focuses on estimates from specification (4), as it has the higher level of precision, as evidenced by its goodness-of-fit statistics. None of the results in Table 5 provide any support for the notion of an excessive herding or bandwagon effect from Season Late or Week. Their lack of statistical significance implies that the predictive (in)accuracy of the polls is about the same at every point in the season. If there were an excessive herding phenomenon, and teams were persisting in the rankings because of a lucky early placement, poll accuracy should decline as the season progresses. However, Season Late and Week are neither statistically significant individually nor jointly. (21) In contrast, Goff (1996) found evidence of path dependence in the AP poll from 1980 to 1989. Similarly, having a prominent journalism program had no statistically significant effect in any of the estimates in Table 5.

The indicator variables for the schools' presence in the Northeast region (NE Region) are intended to test for evidence of an "East Coast Bias." Using the joint test discussed in the previous section, [[theta].sup.j] = [[gamma].sub.1.sup.j] - [[gamma].sub.2.sup.j], the net effect of the East Coast Bias is 0.046 in the AP estimates but -0.009 in the AFCA, and neither is statistically significant. (22) If the conference control variables were excluded, the magnitudes would become more supportive of finding an East Coast Bias, although those estimates also lack statistical significance. These findings suggest that particular conference affiliations are important, but the evidence does not support a bias for being on the east coast per se. Furthermore, individual conference affiliation appears to be a stronger factor in inducing bias than location on the east coast.

The negative sign on Income Diff in Table 5 suggests that the poll is less likely to predict the winner when the favorite has a higher level of county income than the underdog, suggesting favoritism, but this effect is not statistically significant in any specification or poll. Having a larger county population is statistically significant at the 1% level and indicative of a pessimism bias. The signs on the school endowment variable are also consistent with a pessimism bias, but are not statistically significant in either poll. In summary, the estimates are more consistent with market size being a detriment to a school's poll ranking than a source of favoritism. One explanation may be found in Paul, Weinbach, and Coate (2007), who find that television coverage has an asymmetric effect on poll rankings in that it tends to punish losers far more than it rewards winners. If the control variables here serve as a proxy for media coverage, then these results would be consistent with a bias against schools with more media exposure. Future research with better measures of media coverage should be able to sort this out more precisely.

In Table 5, the estimates for specifications (2) and (4) for each poll included the conference control variables, but because of space constraints these coefficients are reported separately in Table 6. The conference affiliation controls require more careful consideration to draw useful inferences because the reported coefficients must all be interpreted relative to the excluded group, which in this case is mainly Conference USA. (23) The zero-sum nature of bias in rankings is such that favoritism toward one conference is also bias against another, and as a result drawing inference on the nature of the bias is more complicated than merely observing the signs of the coefficients in Table 6, as they only reflect statistically significant differences from the polls' ability to predict for Conference USA.

The fact that several conferences do appear to have significant differences in Table 6 suggests that conference affiliation biases exist. However, even if no statistically significant differences appeared, other conferences could still be statistically different from each other. To gain a more comprehensive view of the conference biases, the [theta] calculations are presented for the AP poll in Table 7 and the AFCA poll in Table 8. These estimates compare whether the better ranked team is more or less likely to win when the lower ranked team hails from the conference in the row and is playing against a team from the conference in the column. The estimates along the diagonal of both tables represent an intraconference game. For example, the better ranked team in the AP is more likely to win, by a statistically significant margin relative to the reference case, when that favorite is from the Big 12 conference and the underdog is from the Big East. This bias is not symmetric, as a game played between a higher ranked Big East team and a Big 12 underdog is not statistically different from the reference case.

Net Conference Bias Between Higher and Lower Ranked Team in
AP Poll

                Higher    Higher    Higher    Higher    Higher    Higher
                ACC       Big 10    Big 12    Big East  MAC       MTW

Lower ACC       1.688*    1.880*    1.952**   1.351    -0.796     2.552*
Lower Big 10    1.444*    1.642*    1.714*    1.113    -1.035     2.313**
Lower Big 12    1.681*    1.874*    1.945**   1.345    -0.803     2.545**
Lower Big East  1.311     1.504*    1.575*    0.975    -1.173     2.175**
Lower MAC      -0.739    -0.547    -0.475    -1.075    -3.223*    0.125
Lower MTW       1.275     1.468     1.540     0.939    -1.209     2.140*
Lower PAC-10    2.126**   2.318***  2.390***  1.790**  -0.358     2.990**
Lower SEC       2.212**   2.405***  2.477***  1.876*   -0.272     3.077**
Lower WAC       0.903     1.095     1.167     0.566    -1.581     1.767**
Lower No Conf   1.311     1.504     1.576*    0.975    -1.173     2.175**

                Higher    Higher    Higher    Higher
                PAC-10    SEC       WAC       No Conf

Lower ACC       2.902***  2.481**   1.601*    1.857*
Lower Big 10    2.664***  2.242**   1.363     1.618*
Lower Big 12    2.896***  2.472***  1.594*    1.850**
Lower Big East  2.526***  2.104**   1.224     1.480
Lower MAC       0.476     0.054    -0.826    -0.570
Lower MTW       2.490***  2.068**   1.189     1.444
Lower PAC-10    3.341***  2.919***  2.039**   2.295**
Lower SEC       3.427**   3.005***  2.126**   2.382**
Lower WAC       2.118**   1.696     0.816     1.072
Lower No Conf   2.526**   2.104**   1.224     1.480

Notes: Sample size is 1,347. Conference USA and the Sun Belt are
the reference group. Based on estimates from specification (4) in
Table 6.
F tests of statistical significance indicated at * 10% level; ** 5%
level; *** 1% level.


Net Conference Bias Between Higher and Lower Ranked Team in
AFCA Poll

                Higher    Higher    Higher    Higher    Higher    Higher
                ACC       Big 10    Big 12    Big East  MAC       MTW

Lower ACC       2.049**   1.992**   2.257**   1.596    -0.379     2.317**
Lower Big 10    1.556*    1.499**   1.764**   1.103    -0.872     1.824*
Lower Big 12    1.892***  1.835**   2.101**   1.440    -0.535     2.160**
Lower Big East  1.748*    1.691*    1.957**   1.295    -0.679     2.016**
Lower MAC      -0.289    -0.346    -0.081    -0.742    -2.716    -0.021
Lower MTW       1.192     1.135     1.401     0.739    -1.235     1.460
Lower PAC-10    2.269**   2.212***  2.477***  1.816**  -0.159     2.537***
Lower SEC       2.434**   2.377**   2.642***  1.981**   0.007     2.702***
Lower WAC       1.415     1.358     1.624*    0.963    -1.012     1.683*
Lower No Conf   1.875*    1.818*    2.084**   1.422    -0.552     2.143*

                Higher    Higher    Higher    Higher
                PAC-10    SEC       WAC       No Conf

Lower ACC       3.068***  2.677***  1.927**   2.065*
Lower Big 10    2.575***  2.184*    1.434*    1.572
Lower Big 12    2.911***  2.521***  1.771**   1.909*
Lower Big East  2.767***  2.376**   1.627*    1.765*
Lower MAC       0.730     0.339    -0.411    -0.272
Lower MTW       2.211**   1.820*    1.071     1.209
Lower PAC-10    3.288***  2.897***  2.147**   2.285**
Lower SEC       3.453***  3.062***  2.312**   2.450***
Lower WAC       2.434**   2.044**   1.294     1.432
Lower No Conf   2.894***  2.503**   1.754*    1.892

Notes: Sample size is 1,325. Conference USA and the Sun Belt are
the reference group. Based on estimates from specification (4) in
Table 6.
F tests of statistical significance indicated at * 10% level; ** 5%
level; *** 1% level.

If no conference affiliation bias existed, the estimates in Tables 7 and 8 would be expected to asymptotically approach 0. However, if these polls are treated as forecasts of game outcomes, both tables demonstrate that conferences clearly experience "upsets" at different frequencies. To gain some perspective on how differently these conferences are treated, Table 9 ranks conferences according to the diagonal values listed in Tables 7 and 8, which represent the direction and magnitude of the bias observed in intraconference games. Again, negative values indicate relative favoritism.

According to the estimates in Table 9, the Mid-American Conference (MAC) is the only conference that experiences a favoritism bias relative to the reference group, a bias which appears in both polls. The Pacific-10 Conference (PAC-10) experiences the most pessimism bias in both polls, as indicated by having the largest positive estimate, whereas the Southeastern Conference (SEC) experiences the second most. The two polls differ in the rankings in Table 9 from spots 3 through 7. Generally, this ranking seems to reveal that schools tend to experience more favoritism when their conference lacks an automatic bowl placement for its champion. (24) Conference USA, the Western Athletic Conference (WAC), the Sun Belt, and the MAC occupy ranking slots nine through eleven, suggesting they receive more favorable treatment from pollsters. The Mountain West, the only remaining conference without an automatic bowl bid, is third in the AP and seventh in the AFCA poll. Among conferences with an automatic BCS bid, the Big East ranks highest on this measure of favoritism. Pollsters might favor the conferences that do not receive an automatic bid to help improve their visibility in the bowl games.

Magnitude of Intraconference Pessimism Bias by Poll

 AP Poll Coaches Poll

Conf        [[gamma].sub.1] - [[gamma].sub.2]  Rank   [[gamma].sub.1] - [[gamma].sub.2]  Rank

ACC 1.688* 5 2.049** 4

Big 10 1.642* 6 1.499* 6

Big 12 1.945* 4 2.101** 3

Big East 0.975 8 1.295 8

Reference 0.000 10 0.000 10
Group (a)

MAC -3.223* 11 -2.716 11

MTW 2.140* 3 1.460 7

PAC 10 3.341*** 1 3.288*** 1

SEC 3.005*** 2 3.062*** 2

WAC 0.816 9 1.294 9

No Conf 1.480 7 1.892 5

Notes: Based on estimates from specification (4) in Table 6.
(a.) Conference USA and the Sun Belt are the reference group.
* Significant at 10% level; **significant at 5% level;
***significant at 1% level.

The underdog effect described in Section IV might explain why voters tend to move these teams further up in the rankings than they otherwise would, with the result being that they are disproportionately more likely to experience an upset. Voters may be more responsive to these lower profile schools and want to give them a chance at a BCS berth when they are undefeated late in the season. When MAC schools appear as favorites within the data, they tend to be undefeated with around nine wins, whereas other conferences appear earlier in the season with fewer wins. Conversely, when the SEC appears as the underdog in the polls, this mainly occurs during later weeks in the season, but no clear pattern emerges when the PAC-10 appears as the underdog. These findings on conference bias generally differ from the existing literature on conference bias, which tends to find favoritism attributed to the higher profile schools, such as in Mirabile and Witte (2010) and Coleman et al. (2010). This is most likely driven by the ex-ante versus ex-post methodological differences discussed at the beginning of Section IV.

B. Robustness Check

As a final robustness check, the Both Ranked and Rank [Diff.sup.2] variables are replaced using the Borda points from the two polls. (25) As discussed in Section III, the vote totals are not cardinal numbers that represent information about teams' respective qualities. In other words, a team with twice as many votes as another cannot be said to be "twice as good" as that team, a limitation that many papers have been forced to accept in ex-ante approaches. The ex-post approach to some extent adopts the mirror image of this problem by ignoring the votes as if they carry no information beyond determining which team has the higher ranking. The intuition follows by recalling that the ranking for team j ([R.sub.j]) is derived from a simple function based on tallies from m voters ([V.sub.m]). As described in Section III, the parameter on the intercept [[beta].sub.0] captured the ability of the poll to predict the winner via attributing a team a higher ranking. Bias in the intercept was then reduced using the Both Ranked and Rank [Diff.sup.2] controls. However, the distribution of the rankings could differ considerably from the distribution of votes ascribed to each team. Like the rankings, the Borda points derived from vote tallies do not represent a cardinal ranking. However, there might be some relationship between the Borda points and the differences in team quality that would improve the precision of the regression coefficients by better reflecting that information.
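For concreteness, the Borda tallies behind these polls award 25 points for a first-place vote on a 25-team ballot, 24 for second, and so on, with teams then ranked by total points. A small sketch in Python; the voters and team names here are hypothetical:

```python
from collections import defaultdict

def borda_points(ballots, ballot_length=25):
    """Tally Borda points: position 1 on a 25-slot ballot earns 25
    points, position 2 earns 24, and so on down the ballot."""
    points = defaultdict(int)
    for ballot in ballots:
        for position, team in enumerate(ballot):
            points[team] += ballot_length - position
    return dict(points)

# Three hypothetical voters, each listing the top of a 25-slot ballot.
ballots = [
    ["Ohio State", "USC", "Texas", "Michigan"],
    ["USC", "Ohio State", "Texas", "Michigan"],
    ["Ohio State", "Texas", "USC", "Michigan"],
]
tally = borda_points(ballots)
ranking = sorted(tally, key=tally.get, reverse=True)
print(tally)    # Ohio State: 74, USC: 72, Texas: 70, Michigan: 66
print(ranking)  # the ordinal ranking the regressions condition on
```

The point totals are ordinal in the sense used above: Ohio State's 74 points versus Michigan's 66 says nothing cardinal about how much better one team is, which is why Vote Diff is treated only as a potentially informative control rather than a quality measure.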

The model is amended in this section by creating a control variable for the lower ranked team receiving zero Borda points (Under No Votes), another variable for the difference in Borda points between the higher and lower ranked teams (Vote Diff), and the quadratic term for Vote Diff. Tables 10 and 11 replicate the results in Tables 5 and 6, respectively. The new controls take the expected sign: as the vote difference between the higher ranked and lower ranked team rises, so does the likelihood that the higher ranked team wins. The results for the other controls are extremely similar to those presented in the main results, with the most notable differences being a few changes in statistical significance levels among specific conferences in Table 11.

Probit Estimates for the Higher Ranked Team Winning the Game

 AP Poll

 (1) (2) (3) (4) (1)

Home 0.458*** 0.356*** 0.591*** 0.487*** 0.479***
 (0.080) (0.085) (0.105) (0.118) (0.080)

Vote Diff 0.782* 0.740
 (0.471) (0.509)

Vote [Diff.sup.2] -0.181 -0.191
 (0.284) (0.315)

Under No Votes 0.018 0.021
 (0.121) (0.132)

Win Diff 4.342*** 5.137***
 (0.404) (0.427)

Same Conference -0.328*** -0.128 -0.414*** -0.255 -0.313***
 (0.088) (0.137) (0.140) (0.208) (0.089)

Season Late -0.105 -0.017
 (0.164) (0.262)

Week 0.007 -0.011
 (0.015) (0.022)

Higher Top -0.038 0.166
J-School (0.126) (0.172)

Lower Top J-School -0.021 0.122
 (0.132) (0.181)

Higher NE Region 0.071 0.040
 (0.072) (0.103)

Lower NE Region -0.021 -0.006
 (0.061) (0.086)

Population Diff 0.059*** 0.072**
 (0.018) (0.029)

Income Diff -0.077 -0.050
 (0.052) (0.079)

Endow Diff 0.058 0.033
 (0.040) (0.063)

Conference No Yes No Yes No

Intercept 0.793*** 0.873* 0.106 0.803 0.753***
 (0.090) (0.460) (0.228) (0.719) (0.090)

Pseudo-[R.sup.2] 0.041 0.112 0.518 0.586 0.042

Adj-Count [R.sup.2] 0.000 0.005 0.114 0.125 0.005

 AFCA Poll

 (2) (3) (4)

Home 0.376*** 0.619*** 0.498***
 (0.086) (0.102) (0.112)

Vote Diff 1.091** 1.277**
 (0.508) (0.554)

Vote [Diff.sup.2] -0.409 -0.561
 (0.315) (0.350)

Under No Votes 0.137 0.103
 (0.120) (0.130)

Win Diff 3.961*** 4.523***
 (0.386) (0.419)

Same Conference -0.101 -0.425*** -0.237
 (0.135) (0.138) (0.210)

Season Late -0.151 -0.207
 (0.160) (0.262)

Week 0.011 0.000
 (0.015) (0.019)

Higher Top -0.063 0.056
J-School (0.126) (0.167)

Lower Top J-School -0.081 0.062
 (0.131) (0.167)

Higher NE Region 0.061 0.046
 (0.072) (0.094)

Lower NE Region -0.012 0.051
 (0.061) (0.079)

Population Diff 0.051*** 0.070***
 (0.018) (0.026)

Income Diff -0.076 -0.057
 (0.052) (0.072)

Endow Diff 0.072* 0.029
 (0.040) (0.058)

Conference Yes No Yes

Intercept 0.921* -0.041 0.315
 (0.472) (0.228) (0.654)

Pseudo-[R.sup.2] 0.109 0.500 0.559

Adj-Count [R.sup.2] 0.006 0.120 0.129

Notes: Sample size is 1,347 in the AP poll and 1,325 in the AFCA poll. Robust standard errors are reported in parentheses. Conference coefficients reported in Table 11.
* Significant at 10% level; ** significant at 5% level; *** significant at 1% level.

The [R.sub.AdjC.sup.2] is slightly higher in the robustness checks of Table 10 than in the main results of Table 5. This appears to be largely the result of the improved performance of the baseline model in specifications (1) and (3), as the specifications that control for bias improve the predictive power of the model from 0.011 to 0.025 in specification (3) for the AP poll. This extra 0.014 in [R.sub.AdjC.sup.2] is slightly less than the improvement of 0.018 found in Table 5.

TABLE 11
Conference Coefficients from Regressions in Table 10

 AP (2) AP (4) AFCA (2) AFCA (4)

Higher ACC -0.059 0.372 -0.118 0.636
 (0.291) (0.488) (0.306) (0.507)

Lower ACC -0.789*** -1.454** -0.695*** -1.514**
 (0.257) (0.633) (0.266) (0.672)

Higher Big 0.142 0.494 0.061 0.516
10 (0.330) (0.505) (0.345) (0.509)

Lower Big -0.461* -1.251** -0.528* -1.066*
10 (0.275) (0.610) (0.282) (0.609)

Higher Big 0.513* 0.642 0.510 0.833
12 (0.303) (0.509) (0.321) (0.509)

Lower Big -0.653** -1.532** -0.709*** -1.433**
12 (0.264) (0.631) (0.270) (0.650)

Higher Big -0.066 -0.021 -0.147 0.133
East (0.302) (0.531) (0.313) (0.518)

Lower Big -0.547** -1.051* -0.644** -1.219*
East (0.262) (0.637) (0.269) (0.652)

Higher MAC -1.054** -1.993** -0.889* -1.600*
 (0.474) (0.903) (0.514) (0.972)

Lower MAC 0.716 0.788 0.471 0.621
 (0.441) (0.903) (0.382) (0.916)

Higher MTW 0.448 1.161* 0.248 0.880
 (0.405) (0.645) (0.414) (0.630)

Lower MTW -0.160 -1.065 -0.104 -0.726
 (0.363) (0.822) (0.380) (0.772)

Higher 0.345 1.513** 0.356 1.642***
PAC-10 (0.354) (0.592) (0.368) (0.574)

Lower -0.628** -1.994*** -0.703** -1.871***
PAC-10 (0.271) (0.591) (0.278) (0.614)

Higher SEC 0.567** 1.160** 0.486 1.324**
 (0.285) (0.566) (0.300) (0.537)

Lower SEC -0.985*** -2.021*** -0.964*** -2.001***
 (0.228) (0.676) (0.234) (0.688)

Higher WAC 0.227 0.255 0.230 0.472
 (0.367) (0.587) (0.375) (0.538)

Lower WAC 0.116 -0.683 0.014 -0.877
 (0.377) (0.771) (0.367) (0.653)

Higher No 0.389 0.422 0.179 0.502
Conf (0.392) (0.597) (0.397) (0.596)

Lower No -0.316 -1.075 -0.444 -1.338*
Conf (0.351) (0.728) (0.371) (0.800)

Notes: Sample size is 1,347 in the AP poll and 1,325 in the AFCA poll. Robust standard errors are reported in parentheses. Conference USA and the Sun Belt are the reference group. Based on estimates from specifications (2) and (4) in Table 10.
* Significant at 10% level; ** significant at 5% level; *** significant at 1% level.


This article tests for sources of bias in the AP and AFCA college football rankings from 2003 to 2008. The model presented in this article is unique in that it tests for sources of systematic error, rather than error arising from missing information. To do this, the article tests the rankings using an ex-post method based on the outcomes of games, which differs from the traditional literature that attempts to forecast the change in votes on the basis of recent performances. The results from more than 1,300 games indicate that favorites experience upsets at different frequencies that are correlated with conference affiliation, and many of these differences are statistically different from what would be expected if error were random.

In contrast to the conventional wisdom of college football poll bias, however, these biases are unrelated to, or run counter to, many of its proposed sources. There is no evidence of a bias favoring schools from the East Coast, nor schools with highly ranked journalism programs. Poll accuracy is relatively constant throughout the season, suggesting the absence of path dependence or excessive herding. Of the proxy variables for market power (school endowment, county population, county income), only population is statistically significant, albeit with a sign indicating a bias against schools from more heavily populated areas.

College football is a big industry in its own right, and perhaps for that reason there seems to be recurring interest from Congress in formulating policy to reform the BCS, which is partly based upon a combination of rankings. The College Football Playoff Act of 2009 was introduced into Congress and has passed through the earliest stages of voting. In 2010, Senator Orrin Hatch (R-UT) responded to a BCS report by stating that "the problem is that the small number of privileged schools that participate in the closed system have been unwilling to provide students, athletes, and fans with what they deserve: a fair, unbiased system like the kind they have in literally every other NCAA sport" (The Associated Press 2010). Our results suggest that conference-based bias plays a relatively small role in human-based rankings by experts, so the proposed policies might be addressing a smaller bias than is popularly claimed.

The interest of policymakers in the BCS notwithstanding, the motivating interest in this article is really in the ability of surveys of experts to provide unbiased rankings. Experts are clearly an important part of the policymaking process. They are called in to testify before policymakers, petitions of experts supporting or opposing particular policies are often solicited by influential politicians, and policy think tanks sometimes generate more complicated surveys of expert opinions on important issues. (26) These are costly actions undertaken by agents interested in change, suggesting that expert opinion can be important and influential. Experts in these polls do not have a direct financial incentive to overcome bias, unlike traders in a betting market. However, prediction markets have met a great deal of political resistance to being applied to public policy, whereas surveying and soliciting expert opinion is commonplace. (27) College football provides a case where polls of experts are asked to provide an unbiased ranking of football team quality, just as policy experts might be polled on which policy alternatives are most appropriate. Except for the presence of conference affiliation bias, the findings here are generally supportive of aggregate expert opinion. Not only did both polls predict the winner almost 80% of the time, but many of the popular claims of bias do not hold up to empirical scrutiny, and correcting for bias would only improve prediction of game outcomes from the rankings by about 1% to 2%.

If external validity holds to experts of other fields, then surveys designed to counteract bias might be similarly beneficial. In economics, for instance, there exist many examples of surveys of economists on various policies, and some discussion of these can be found in Alston, Kearl, and Vaughan (1992), Fuchs, Krueger, and Poterba (1998), and Whaples (2006). If economists hold predictable forms of bias, such as the political ideology-based bias suggested in Barkley (2010), then it may be important to stratify the sampling to counteract it in a manner similar to the AP and AFCA polls.

Finally, the "ex-post" approach employed in this article overcomes some of the methodological problems of the "ex-ante" approach in the sports economics literature on conference bias. Future researchers interested in the causal mechanisms of bias tied to the conferences, such as having famous coaches or more favorable television time slots, could adopt some of the same variable choices used in the existing literature, but adopt the ex-post methodology we demonstrate here.


ACC: Atlantic Coast Conference

AFCA: American Football Coaches Association

AP: Associated Press

BCS: Bowl Championship Series

CUSA: Conference USA

HRW: Higher Ranked Team Wins

MAC: Mid-American Conference

MTW: Mountain West Conference

OLS: Ordinary Least Squares

PAC-10: Pacific-10 Conference

SEC: Southeastern Conference

WAC: Western Athletic Conference

(1.) The AFCA poll is currently sponsored by USA Today.

(2.) Much of our information about the AP poll is not readily available to the public. As a result, much of the information provided was relayed to us by the Associated Press Sports Editor, Terry Taylor (2009), whose time we greatly appreciate.

(3.) Since 1991, the coaches' poll has been sponsored or cosponsored by various media organizations, including ESPN, CNN, and USA Today (Carey 2005).

(4.) Bowl games, which occur at the conclusion of the regular season, were not included in the analysis because the majority of them are selected by committees. It is widely believed that the selection of teams is driven by factors besides the overall ranking of the team, such as the ability of the match-up of teams competing to draw high television ratings. Our concern was that this might, in effect, result in bowl committees looking for match-ups that have a high probability of upset, which would confound the results from the regular season.

(5.) The interpretation of a [beta] coefficient in an OLS regression of y on x is a result of the assumption that ([partial derivative][y.sub.i]/[partial derivative][x.sub.i]) = ([partial derivative]y/[partial derivative][x.sub.i]) = [beta]. However, if the independence of observations assumption is violated, then this relationship does not hold. In a system with a fixed number of ranking slots or votes, as is the case in college football polls, [partial derivative][y.sub.i]/[partial derivative][x.sub.j] [not equal to] 0: one team gaining votes necessarily comes at the expense of another. Spatial econometricians will recognize this as a special case of a spatially autoregressive dependent variable, where all off-diagonal elements of the spatial weight matrix are nonzero and the model is thus overidentified. See LeSage and Pace (2009) for an extended discussion of this bias and the difficulties in interpretation of correlation coefficients, as well as Anselin (1988, 24-26) or Blommestein (1985) regarding the overidentification problem.

(6.) For example, Coleman et al. (2010) noted failed convergence by both ordered probit and ordered logit, and therefore had to proceed by estimating with a Tobit regression on the rank a team received.

(7.) In contrast, Lebovic and Sigelman (2001) studied how teams' AP ranking changed in response to various performance indicators.

(8.) The final results were virtually identical to those produced by a logit regression.

(9.) An alternative explanation of the same phenomenon is mentioned in Fair and Oster (2007).

(10.) See Schreiber (2008) for an extensive discussion on causes and reasoning of the East Coast Bias.

(11.) An earlier version of this article employed a control for teams from the Eastern Time Zone, rather than the Northeast Region. We appreciate an anonymous referee, who pointed out that this was too literal of an interpretation of the phenomenon, and that it is really directed toward teams more closely affiliated with the "northeastern megalopolis."

(12.) The time difference likely instills what the consumer psychology literature refers to as the mere exposure bias, the phenomenon that people unknowingly develop a favorable preference toward something merely by being exposed to it. Exposure bias enjoys a large and robust amount of support in empirical work (e.g., see Janiszewski 1993; Zajonc 2001).

(13.) An earlier version of this article used average annual alumni donations, which had many missing observations. We appreciate an anonymous referee for suggesting the use of this variable as an alternative.

(14.) The authors would like to thank an anonymous referee for suggesting these additional specifications with quality control variables.

(15.) As the probit regression is estimated by solving for the parameters that maximize the likelihood function, the McFadden pseudo-[R.sup.2] is 1 - lnL/ln[L.sub.0], where lnL is the resulting log likelihood score from the full model and ln[L.sub.0] is the score from a model that uses only the intercept. As the intercept in this model represents the predictive power of the polls, the better the polls are at predicting the winner, the lower this goodness-of-fit measure becomes.
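The footnote's measure is straightforward to compute from the two log likelihood scores. A small sketch, using illustrative numbers rather than the paper's estimates:

```python
def mcfadden_pseudo_r2(ll_full: float, ll_null: float) -> float:
    """McFadden pseudo-R^2 = 1 - lnL / lnL_0, where ll_null is the
    log likelihood of an intercept-only model."""
    return 1.0 - ll_full / ll_null

# Hypothetical log likelihood scores: when the intercept-only model
# already predicts winners well, ll_null is close to ll_full, which
# keeps the measure small even for a useful set of controls.
print(round(mcfadden_pseudo_r2(-670.0, -700.0), 4))  # 0.0429
```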

(16.) To determine the statistical power of the model, a power test was employed to compare specification (1) against specification (2) and specification (3) against specification (4). Assuming the bias variables only explained 1% of the variation, the test for the probability of a Type II error indicated that a sample size just over 700 would yield a power of 0.95. As the actual sample size is almost twice that, the model is not likely to accept a null hypothesis that is actually false.

(17.) Tests for multicollinearity revealed that the sample size was sufficient to identify the independent variation coming from these variables.

(18.) By contrast with the traditional OLS [R.sup.2], the [R.sub.AdjC.sup.2] does not necessarily increase with the inclusion of variables.
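One common definition of the adjusted count R-squared (assumed here to match the paper's [R.sub.AdjC.sup.2]) is the share of correct predictions in excess of always guessing the modal outcome, which is why it does not mechanically rise as variables are added:

```python
def adjusted_count_r2(y_true, y_pred):
    """Adjusted count R^2 = (correct - n_mode) / (n - n_mode), where
    n_mode counts the most frequent outcome. This is the standard
    definition for binary outcome models (e.g., Long 1997); it is
    zero when the model does no better than picking the mode."""
    n = len(y_true)
    correct = sum(int(t == p) for t, p in zip(y_true, y_pred))
    n_mode = max(sum(y_true), n - sum(y_true))  # count of modal outcome
    return (correct - n_mode) / (n - n_mode)

# Toy example: 8 of 10 favorites win. A model getting 9 games right
# beats the "always pick the favorite" baseline on 1 of the 2 upsets.
y = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
yhat = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
print(adjusted_count_r2(y, yhat))  # 0.5
```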

(19.) The AP figure is calculated as (0.001 - 0.000 = 0.001), and the AFCA figure is based on (0.005 - 0.002 = 0.003).

(20.) This is calculated as 1,347 x (0.121 - 0.111) = 13.47 and 1,325 x (0.126 - 0.117) = 11.93.
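The arithmetic in footnote 20 (sample size times the gain in adjusted count R-squared, giving the number of additional games predicted correctly) can be reproduced directly:

```python
# Additional games predicted correctly once bias controls are added.
ap_games = 1347 * (0.121 - 0.111)
afca_games = 1325 * (0.126 - 0.117)
print(f"AP: {ap_games:.2f}, AFCA: {afca_games:.2f}")
```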

(21.) Joint significance based on F test that SeasonLate = Week = 0, results available upon request.

(22.) These results are consistent with our findings in an earlier version of the article that used the Eastern Time Zone instead of the Northeast Census region as the test for an East Coast Bias. Those results are available upon request.

(23.) As well as the Sun Belt Conference and several smaller NCAA Division I-AA schools that appeared infrequently within the data set. These schools often appeared in the data solely as unranked competition for a top-25 school during the first few weeks of the season.

(24.) Automatic bids to Bowl Championship Series (BCS) games are provided to the champion of certain Division I-A conferences: Atlantic Coast Conference (ACC), Big East, Big 10, Big 12, PAC-10, and SEC. For the champion of Conference USA, MAC, Mountain West Conference, Sun Belt Conference, and the Western Athletic Conference, an automatic bid to a BCS game is provided only if the team attains a rank of 12 or higher in the final BCS standings, or the team attains a rank of 16 or higher in the final BCS standings AND a champion from a conference receiving an automatic bid possesses a rank lower than 16 in the BCS standings. Notre Dame receives an automatic bid to a BCS game if the team attains a rank of 8 or higher in the BCS standings.

(25.) We would like to thank an anonymous referee for this suggestion.

(26.) For example, during the debate over the American Recovery and Reinvestment Act of 2009, Republicans solicited a petition from economists on their opinion of the stimulus plan, while the Center for Economic and Policy Research issued their own in support of the plan. For surveys, the Livingston Survey produced by the Federal Reserve is perhaps the most famous survey of economists on economic indicators. The news network CNN surveyed economists on their opinion of the expiring Bush tax cuts (Isidore 2010).

(27.) Robin Hanson describes the political and media outrage that followed from an attempt to implement prediction markets in the U.S. Department of Defense.


Akerson, C. L., M. Healy, and V. Romero. "Ingroup Bias and Self-Esteem: A Meta-Analysis." Personality and Social Psychology Review, 4(2), 2000, 157-73.

Alston, R., J. R. Kearl, and M. Vaughan. "Is There a Consensus among Economists in the 1990s?" American Economic Association: Papers and Proceedings, 82(2), 1992, 203-9.

Anselin, L. Spatial Econometrics: Methods and Models. New York: Kluwer Academic Publishers, 1988.

Armstrong, J. S. "Combining Forecasts," in Principles of Forecasting: A Handbook for Researchers and Practitioners, edited by J. Scott Armstrong. Norwell, MA: Kluwer Academic Publishers, 2001.

Avery, C., and J. Chevalier. "Identifying Investor Sentiment from Price Paths: The Case of Football Betting." Journal of Business, 72, 1999, 493-521.

Barkley, B. "When the White House Changes Party, Do Economists Change Their Tune on Budget Deficits?" Econ Journal Watch, 7(2), 2010, 119-56.

Bikhchandani, S., D. Hirshleifer, and I. Welch. "Learning from the Behavior of Others: Conformity, Fads, and Informational Cascades." The Journal of Economic Perspectives, 12(3), 1998, 151-70.

Blommestein, H. "Elimination of Circular Routes in Spatial Dynamic Regression Equations." Regional Science and Urban Economics, 13, 1985, 251-70.

Brown, B. W., and S. Maital. "What Do Economists Know? An Empirical Study of Experts' Expectations." Econometrica, 49(2), 1981, 491-504.

Bruce, R. S. "Group Judgments in the Field of Lifted Weights and Visual Discrimination." Journal of Psychology, 1, 1936, 117-21.

Carey, J. "ESPN Severs Ties to Coaches' Poll." USA Today Online, June 7, 2005. Accessed August 2, 2009.

Chase, W. G., and H. A. Simon. "Perception in Chess." Cognitive Psychology, 4, 1973, 55-81.

Chi, M. T. H., P. J. Feltovitch, and R. Glaser. "Categorization of Physics Problems by Experts and Novices." Cognitive Science, 5, 1981, 121-52.

Christensen-Szalanski, J. J. J., and C. F. Willham. "The Hindsight Bias: A Meta-Analysis." Organizational Behavior and Human Decision Processes, 48, 1991, 147-68.

Coleman, B. J., A. Gallo, P. Mason, and J. W. Steagall. "Voter Bias in the Associated Press College Football Poll." Journal of Sports Economics, 11(4), 2010, 397-417.

Crocker, J., and R. Luhtanen. "Collective Self-Esteem and Ingroup Bias." Journal of Personality and Social Psychology, 58(1), 1990, 60-67.

Fair, R. C., and J. F. Oster. "College Football Rankings and Market Efficiency." Journal of Sports Economics, 8(1), 2007, 3-18.

Frazier, J. A., and E. E. Snyder. "The Underdog Concept in Sport." Sociology of Sport Journal, 8(4), 1991, 380-8.

Fuchs, V. R., A. B. Krueger, and J. M. Poterba. "Economists' Views about Parameters, Values, and Policies: Survey Results in Labor and Public Economics." Journal of Economic Literature, 36(3), 1998, 1387-425.

Goff, B. "An Assessment of Path Dependence in Collective Decisions: Evidence from Football Polls." Applied Economics, 28, 1996, 291-97.

Goldschmied, N. P., and J. A. Vandello. "The Advantage of Disadvantage: Underdogs in the Political Arena." Basic & Applied Social Psychology, 31(1), 2009, 24-31.

Healy, P. J., S. Linardi, R. J. Lowery, and J. O. Ledyard. "Prediction Markets: Alternative Mechanisms for Complex Environments with Few Traders." Management Science, 2010 Oct 11. [Epub ahead of print]

Henshel, R. L., and W. Johnston. "The Emergence of Bandwagon Effects: A Theory." The Sociological Quarterly, 28(4), 1987, 493-511.

Isidore, C. "Economists: Reform the Tax Code." CNNMoney.com, December 22, 2010. Accessed January 5, 2011.

Janiszewski, C. "Preattentive Mere Exposure Effects." The Journal of Consumer Research, 20(3), 1993, 376-92.

Keren, G. "Facing Uncertainty in the Game of Bridge: A Calibration Study." Organizational Behavior and Human Decision Processes, 39, 1987, 98-114.

Kim, J., S. T. Allison, D. Eylon, G. R. Goethals, M. J. Markus, S. M. Hindle, and H. A. McGuire. "Rooting for (and Then Abandoning) the Underdog." Journal of Applied Social Psychology, 38(10), 2008, 2550-73.

Larkin, J., J. McDermott, D. P. Simon, and H. A. Simon. "Expert and Novice Performance in Solving Physics Problems." Science, 208, 1980, 1335-42.

Lebovic, J. H., and L. Sigelman. "The Forecasting Accuracy and Determinants of Football Rankings." International Journal of Forecasting, 17, 2001, 105-20.

LeSage, J., and R. K. Pace. Introduction to Spatial Econometrics. Boca Raton, FL: Taylor & Francis, 2009.

Levitt, S. D. "Why Are Gambling Markets Organized So Differently from Financial Markets?" Economic Journal, 114(495), 2004, 223-46.

Lorge, I., D. Fox, J. Davitz, and M. Brenner. "A Survey of Studies Contrasting the Quality of Group Performance and Individual Performance, 1920-1957." Psychological Bulletin, 55(6), 1958, 337-72.

Luckner, S. "Prediction Markets: Fundamentals, Key Design Elements, and Applications," in Proceedings of the 21st Bled eConference, Bled, Slovenia, June 2008, pp. 236-47.

Lynn, F. M. "The Interplay of Science and Values in Assessing and Regulating Environmental Risks." Science, Technology, & Human Values, 11(2), 1986, 40-50.

McFadden, D. L. "The Measurement of Urban Travel Demand." Journal of Public Economics, 3, 1974, 303-28.

Mehrabian, A. "Effects of Poll Reports on Voter Preferences." Journal of Applied Social Psychology, 28(23), 1998, 2119-30.

Mirabile, M. Paul, and M. D. Witte. "Not So Fast, My Friend: Biases in College Football Polls." Journal of Sports Economics, 11(4), 2010, 443-55.

Morwitz, V. G., and C. Pluzinski. "Do Polls Reflect Opinions or Do Opinions Reflect Polls? The Impact of Polling on Voters' Expectations, Preferences, and Behavior." The Journal of Consumer Research, 23(1), 1996, 53-67.

Murphy, P. "Affiliation Bias and Expert Disagreement in Framing the Nicotine Addiction Debate." Science, Technology, & Human Values, 26(3), 2001, 278-99.

Olsen, R. A. "Desirability Bias among Professional Investment Managers." Journal of Behavioral Decision Making, 10(1), 1997, 65-72.

Otway, H., and D. von Winterfeldt. "Expert Judgment in Risk Analysis and Management: Process, Context, and Pitfalls." Risk Analysis, 12(1), 1992, 83-93.

Paul, R. J., A. P. Weinbach, and P. Coate. "Expectations and Voting in the NCAA Football Polls: The Wisdom of Point Spread Markets." Journal of Sports Economics, 8(4), 2007, 412-24.

Poses, R. M., and M. Anthony. "Availability, Wishful Thinking, and Physicians' Diagnostic Judgments for Patients with Suspected Bacteremia." Medical Decision Making, 11, 1991, 159-68.

Sauer, R. D. "The Economics of Wagering Markets." Journal of Economic Literature, 36, 1998, 2021-64.

Schreiber, L. A. "Geography Lesson: Breaking Down the Bias in ESPN's Coverage." August 15, 2008. Accessed December 1, 2008.

Shanteau, J. "Competence in Experts: The Role of Task Characteristics." Organizational Behavior and Human Decision Processes, 53, 1992, 251-66.

Sobel, R., and M. Ryan. "Unifying the Favorite-Longshot Bias with Other Market Anomalies," in Handbook of Sports and Lottery Markets, edited by D. Hausch and W. Ziemba. Amsterdam: North Holland-Elsevier, 2008, pp. 137-60.

Stone, D. F. "Testing Bayesian Updating with the AP Top 25." June 2009. Accessed April 29, 2011.

Sunstein, C. Infotopia: How Many Minds Produce Knowledge. New York: Oxford University Press, 2006.

Tajfel, H. "Social Psychology of Intergroup Relations." Annual Review of Psychology, 33, 1982, 1-39.

Taylor, T. Personal communication, July 16, 2009.

The Associated Press. "College Football Notebook: AP Removes Poll Voter Who Mistakenly Thought Sooners Lost." The Seattle Post-Intelligencer Online, November 16, 2006. Accessed July 17, 2008.

--. "BCS Head: Colleges, Not Congress, Should Run Sport." Sporting News, May 20, 2010. Accessed July 10, 2010.

Vandello, J. A., N. P. Goldschmied, and D. A. R. Richards. "The Appeal of the Underdog." Personality and Social Psychology Bulletin, 33(12), 2007, 1603-16.

Wever, S., and D. Aadland. "Herd Behavior and Underdogs in the NFL." University of Wyoming Department of Economics Working Paper, 2010.

Whaples, R. "Do Economists Agree on Anything? Yes!" The Economists' Voice, 3(9), 2006, 1-6.

Wolfers, J., and E. Zitzewitz. "Interpreting Prediction Market Prices as Probabilities." Wharton School Working Paper Series, 2007. Accessed December 15, 2008.

Zajonc, R. B. "Mere Exposure: A Gateway to the Subliminal." Current Directions in Psychological Science, 10(6), 2001, 224-28.


* The authors appreciate the help of Terry Taylor of the Associated Press. We are also thankful for helpful feedback from Matt Ryan, Brad Humphreys, three anonymous referees, and participants in the Research Seminar Series at SPEA-IUB, as well as to Jesse Musgrave, Jennifer Pollitt, and Joseph Min Kim for their assistance in data collection. The authors are solely responsible for any errors.

Ross: School of Public & Environmental Affairs, Indiana University, Bloomington, IN 47405. Phone (812) 856-7559, Fax (812) 855-7802, E-mail

Larson: School of Public & Environmental Affairs and Department of Political Science, Indiana University, Bloomington, IN 47405. Phone (740) 590-0844, Fax (812) 855-7802, E-mail

Wall: Department of Athletics, Virginia Commonwealth University, Richmond, VA 23284. Phone (804) 828-7618, Fax (804) 227-6215, E-mail

doi: 10.1111/j.1465-7287.2011.00275.x
COPYRIGHT 2012 Western Economic Association International

Author:Ross, Justin M.; Larson, Sarah E.; Wall, Chad
Publication:Contemporary Economic Policy
Date:Oct 1, 2012
