# Quantifying the life cycle of scholarly articles across fields of economic research

I. INTRODUCTION

Scholarly articles are the coin of the realm in modern academia; they influence researchers' career paths, salaries, and reputations (Cole and Cole 1967; Ellison 2013; Gibson, Anderson, and Tressler 2014; Smith and Eysenck 2002), as well as their departments' and universities' rankings (Hazelkorn 2011; Zimmermann 2013). The question as to how to measure the impact or importance of an article is a topic of much debate (Meho 2007; Vucovich, Baker, and Smith 2008), but the appeal of having "unobtrusive measures that do not require the cooperation of a respondent and do not themselves contaminate the response (i.e., they are non-reactive)" (Smith 1981, 84) has made citation counts the de facto standard for measuring scholarly articles' impact. The origin of citation counts dates back to Gross and Gross's (1927) seminal paper (1); since then, the adoption of citation counts as a tool for measuring articles' importance has been overwhelming. Technological advances have had an effect on the popularity of this practice. As in many other fields, the availability and use of detailed data have grown exponentially since the onset of the "data revolution" (Einav and Levin 2014) and have been leveraged in recent years by automated citation indexing services such as CiteSeer (Giles, Bollacker, and Lawrence 1998) and Google Scholar (see Giles 2005), which collect large amounts of citation data and make it accessible to the general public free of charge.

Currently, citation counts are being used not only to measure the visibility, impact, and quality of articles but also to measure the performance of researchers, research laboratories, departments, academic journals and, to some extent, national science policies (e.g., Bayer and Folger 1966; Garfield 1972; King 2004; Narin 1976; Oppenheim 1995; Tijssen, Van Leeuwen, and Van Raan 2002). The influence that publishing has on the careers of scholars (especially young ones) is clearly reflected in the old mantra "publish or perish." (2) One could argue that the use of citation counts to evaluate scientific output has caused this phrase to fall short of the mark. Today, it is not just about publishing; it is about high-impact publishing (i.e., publications with substantial citation counts).

Although the value of objectively quantifying the importance of academic papers is evident, a great deal of criticism has been leveled at the practice of naively using citation analysis to compare the impact of different scholarly articles without taking into account other factors which may affect citation patterns (see Bornmann and Daniel 2008). Among these criticisms, a recurrent one focuses on "field-dependent factors," which refers to the fact that citation practices vary from one area of science to another (with the focus generally being on differences in citation practices between the hard sciences and the social sciences). In some fields, recent literature is cited more frequently than in others (see, e.g., Peters and Van Raan 1994), and different fields may have different structural characteristics which can increase or decrease the probability of a paper being cited. (3) If these arguments hold true, then they should be considered when the performance of researchers, journals, or institutions is being assessed. One good example of field-dependent factors' relevance is how they might affect different journals' impact factors. (4) If a given field tends to cite newer papers, journals that publish papers dealing with that field will clearly benefit in terms of their impact factors, and researchers who are encouraged to publish in high-impact journals will have an incentive to focus their studies on subjects in that field.

What about economics? Economics as a discipline has not been exempt from these trends. Economic journals' impact factors and economic departments' rankings and tenure offerings are all influenced, to a greater or lesser degree, by the citation patterns of economic research articles (see, among others, Coupe 2003; Ellison 2013; Gibson, Anderson, and Tressler 2014; Hamermesh and Pfann 2012; Hamermesh, Johnson, and Weisbrod 1982; Hilmer, Hilmer, and Ransom 2012; Ruane and Tol 2008). Nevertheless, economics as a field of study is far from homogeneous at many levels. One obvious source of heterogeneity in economics stems from the fact that it covers a large number of subjects (as reflected in the extensive range of topics covered by the Journal of Economic Literature [JEL] codes). Another less-studied source of heterogeneity in economics has to do with the methodological techniques used to address a particular subject. This is reflected in the fact that a given topic is often addressed by means of diametrically opposed methodological strategies (e.g., through theoretical modeling and empirical analysis) or, to use the terms that we will employ to discuss this subject in this paper, addressed as corresponding to different fields of economic research. This is an important point because, as Hamermesh (2013) clearly states, "subject does not automatically imply method." (5)

The heterogeneity of economic research, the importance of citation counts, and the influence of field-dependent factors all raise thought-provoking questions: Are the life cycles of papers concerning different fields of economic research different? Should we distinguish among different fields of economics when evaluating economic research performance? In this article, we address these questions empirically. To this end, we construct a large dataset in which we first classify economic research articles into one of four fields of research (applied, applied theory, econometric methods, and theory). (6) Our sample of articles includes every research article published between 1970 and 2000 in the top five general-interest economics journals: the American Economic Review (AER), Econometrica (ECA), the Journal of Political Economy (JPE), the Quarterly Journal of Economics (QJE), and the Review of Economic Studies (RES). We then map the trends in citations across time for each of the articles. This allows us to construct a time series for every paper in our sample that we can then use to analyze how many citations a paper has received in each year since it was published. Our data and analysis suggest that papers have a clear-cut life cycle: after being published, they begin to be read and cited, the yearly citations of those articles then eventually reach a peak, after which the citations begin to decline, probably because newer papers take their place. Even more importantly, the data suggest that this cycle varies across papers from different research fields. Applied and applied theory papers are the clear winners in citation counts. In the first years following their publication, they receive a much higher number of citations than papers in the other categories, and their peak citation counts, which are more than double those of theory papers, appear to persist over a longer period of time.
Citation patterns are much less favorable for theory papers, which have dramatically shorter lifespans and lower peaks than applied and applied theory papers. Econometric method papers are a special case; the pattern for the vast majority of these papers is similar to the pattern for theoretical papers, but there are a few very successful econometric method papers that have an extremely high number of citations and long lifespans.

This article contributes to a recently growing body of literature on quantitative economics and its evolution as viewed through the lens of the relevant papers' characteristics, their citation performance, and the journals' decisions about what to publish. However, to the best of our knowledge, our paper is the first to analyze the life cycles of economic research articles in different fields. The most closely related paper focusing on citation counts as a means of analyzing changes in top-rated economic journals is that of Card and DellaVigna (2013). In contrast to the approach taken in our paper, these authors use JEL codes to classify papers on the basis of the topics that they covered (regardless of the analytical method used in each paper) and then analyze their impact by looking at total citation counts (not the way in which the citation counts evolved in the years following publication). They report a rising aggregated trend in citations of more recent papers in the "Development" and "International" JEL fields and a declining aggregated trend in citations of recent papers in the "Econometric" and "Theory" JEL fields. Hamermesh (2013) studies changes in patterns of coauthorship, age structure, and methodology for papers published in three top journals (AER, JPE, and QJE) since the 1960s. As far as we have been able to determine, this paper is the only one that analyzes changes in the characteristics of articles across methodological fields (although the classifications used differ from ours), but it does not assess the trend in terms of the length of time that citations remain in a trough. Aizenman and Kletzer (2011) analyze the impact of the death of productive economists on the patterns of their citations. They work with a sample of 428 papers written by 16 well-known academic economists who died before they retired.
In their analysis, they deflate citation trends for scholarly articles using an index which takes into account the volume of papers published in a given year compared to a base year. Beyond the scope of our paper, Ellison (2013) studies how modifications of Hirsch-like citation indexes (Hirsch 2005) align with labor-market outcomes for young tenured economists in 50 different U.S. college departments. (7) He adjusts these indexes for differences across 15 economic topics and finds that adjusted citation indexes do a fairly good job of accounting for labor-market outcomes. In line with our study of the life cycle of scholarly articles, Bjork, Offer, and Soderberg (2014) analyze the citation trajectories of Nobel Prize winners in economics from 1930 to 2005 using citation data from The Data for Research service of the JSTOR journal database. Their paper focuses on authors' citation trends over time rather than on articles' citation trends. According to their findings, trends can be described mathematically by means of the Bass model of diffusion of innovations, which yields a bell-shaped curve that provides a good fit for most of the citation trends for Nobel Prize winners. Chiappori and Levitt (2003) explore the question as to whether theoretical economic research succeeds in influencing the path of empirical microeconomic research. To this end, they use a database on empirical microeconomic papers published in the AER, the JPE, and the QJE between 1999 and 2001. They find that the set of theoretical papers cited as a primary motivation for empirical research projects is surprisingly diverse, with very few theoretical papers having much of an influence on applied microeconomic papers. 
They also find that empirical research appears to be heavily influenced by recent theoretical contributions, inasmuch as half of the citations to theoretical papers concern papers which are less than a decade old, even in cases where they are generally addressing older, traditional economic concepts. Similarly, by examining the ten leading "core" economics journals from 1987 to 1990, Stigler, Stigler, and Friedland (1995) find that the importance of general economic theory is manifest in the citation patterns of journals that have a strong empirical orientation. Finally, another noteworthy paper, although it is not part of the economic literature, is that of Abt (1996), which studies the half-life of 165 papers published in the Astrophysical Journal and Supplements in 1954.

The rest of this article is organized as follows: Section II describes how we built our dataset and presents descriptive statistics for the main variables. Section III covers our empirical analysis and main results. Section IV concludes.

II. DATA

We use data from two main sources: EconLit and Google Scholar. Using the data provided by EconLit, we list all articles published in the top five journals from 1970 to 2000 along with each article's title, the name(s) of its author(s), its JEL codes, and publication information (pages, journal name, and volume). Based on both the title of the paper and subsequent checks, we exclude documents which we identify as comments/replies, addresses/speeches, and corrections. Like Card and DellaVigna (2013), we also exclude articles in the Papers and Proceedings of the AER and papers published in ECA as "Notes and Comments." This leaves us with a final dataset of 9,672 full-length refereed articles. We also follow the methodology used by Card and DellaVigna (2013) to classify each article's JEL codes into a consistent set of 14 major fields (details on this procedure are provided in Appendix S1).
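As a rough illustration, the title-based exclusion step can be sketched as follows. The record fields and keyword list here are hypothetical, not the authors' exact rules:

```python
# Hypothetical sketch of the EconLit filtering step described above.
# Record fields and exclusion keywords are illustrative assumptions.

EXCLUDE_KEYWORDS = ("comment", "reply", "address", "speech", "correction",
                    "notes and comments")

def is_full_length_article(record):
    """Return True if the record looks like a refereed full-length article."""
    title = record["title"].lower()
    return not any(kw in title for kw in EXCLUDE_KEYWORDS)

records = [
    {"title": "Optimal Taxation in a Growth Model", "journal": "AER"},
    {"title": "Comment on Optimal Taxation", "journal": "AER"},
    {"title": "Presidential Address: The State of Economics", "journal": "AER"},
]

# Only the first record survives the filter.
articles = [r for r in records if is_full_length_article(r)]
```

In practice, as noted above, title-based matching of this kind would be supplemented with manual checks.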

The field of research corresponding to each paper is identified by skimming each article, as in Hamermesh (2013). We classify each paper into one and only one of the following research fields: applied, applied theory, econometric methods, and theory. The criteria used to assign a paper to a category are as follows: applied papers are papers that have an empirical or applied motivation. They rely on the use of econometric or statistical studies as a basis for analyzing empirical data, although they may deal with simple models that serve as a theoretical framework for the analysis. This category also includes papers which do not use sophisticated econometric methods but do use descriptive statistics to analyze, for example, given features of an economy, with the empirical section figuring as the central element. Applied theory papers develop theoretical models to explain a fact; the empirical analysis is not the most important feature of the paper, but a supplement. In these papers, the use of econometric or statistical analyses is limited, although they may use simulations (even with empirical data) or other techniques to test the implications of the models. Econometric method papers are articles that develop econometric or statistical methodologies. They also include papers that develop methodologies for collecting data and that address issues of identification, data aggregation, or optimization techniques. Finally, theoretical papers do not contain an empirical section; they usually approach a topic by modeling and by making extensive use of formal mathematics and logic. They may include a numerical example or a simple model calibration with theoretical data to illustrate the proposed model or analyze its comparative statics. Appendix S1 provides a detailed overview of the main characteristics of the dataset and information on the way in which we have classified papers into these four categories.

Table 1 shows the distribution of articles across research categories for every JEL field. A large number of patterns that appear to justify our classification can be identified. Although a JEL field may be approached by means of different fields of research, one would expect an over-representation of theoretical papers in the "Theory" JEL field and of econometric method papers in the "Econometrics" JEL field, and this pattern is indeed evident. The high, though not extreme, proportion of applied papers in the "Labor," "Health and urban economics," and "Lab-based experiments" JEL fields is also a reassuring sign of the soundness of this approach.

Table 2 shows the distribution of articles across fields of research and journals for all articles published in the top five journals from 1970 to 2000. As the reader will see, there is a strong specialization of journals in different fields of research, particularly ECA in theory and econometric methods and RES in theory. In the case of the AER, the JPE, and the QJE, the distribution seems to be more balanced across applied, applied theory, and theory articles, while papers dealing with econometric methods do not appear very often in these journals. Have these patterns been stable across time? Figure 1 shows that this is not the case. The method used to look into this question involves plotting the trend in the appearance of papers dealing with different fields of research in every journal and in all the top five journals as a group. The patterns that emerge are quite interesting. In particular, it is notable how applied papers have grown in importance since the beginning of the 1990s, whereas theory papers have done just the opposite. This shift has been particularly sharp in the case of the QJE, where applied papers have risen to prominence since the mid-1990s while edging out theory papers (this process actually started at the beginning of that decade). Another interesting (and encouraging) pattern that emerges is the shared trend seen over time in the case of the AER and the JPE, which are known to have similar audiences. Additionally, from Figure 1, it can be seen that these two are the journals mainly responsible for the growth seen in applied theory papers since the early 1990s. Finally, it is surprising how stable the participation of different fields of research has been in the RES, with roughly three-fourths of the papers published every year throughout the entire study period being theory papers.

Traditionally, scholarly citation data have been gathered from subscription-based scientific citation indexing services such as Thomson Reuters' Web of Science or Elsevier's Scopus, and a large body of literature based on data gathered from these sources--especially Web of Science--therefore exists. Recently, Google Scholar has also been adopted as a source of data for these types of studies. Nevertheless, comparing results derived from data obtained from different services is not straightforward, mainly because what is defined as "scholarly" varies a great deal from one service to the next. (8) Given that highly relevant articles for our research base their claims on data gathered from Google Scholar (e.g., Card and DellaVigna 2013; Ellison 2013) and that recent specialized literature is providing evidence in support of Google Scholar as a reliable source for bibliometric studies in the social sciences (see De Winter, Zadpoor, and Dodou 2014 and Harzing 2014), we chose Google Scholar as our citation data source. From Google Scholar, we collected detailed data on citations of each article for every year since its publication. This allows us not only to quantify an article's importance on the basis of its total number of Google Scholar citations but also to quantify the pattern of citations for all the articles over time. Data were retrieved from Google Scholar from the beginning of September 2014 to the end of October 2014 for each of the 9,672 scholarly articles in the dataset using web crawling and natural language processing techniques. For roughly 2.3% of all articles, citation data could not be identified by automatic means. In these cases, the identification was done manually, and web crawling techniques were used to collect the data. Further details on this procedure are available in Appendix S1. 
A few citations of articles in Google Scholar do not have a timestamp attached to them; we noted that these citations tend to have a low impact (i.e., they are associated with a null citation count or with nonformal scholarly documents), and we therefore decided to ignore the small subset of citations which do not have a timestamp.

Table 3 presents summary statistics for citation data at the article level across journals and fields of research for all articles published in the top five journals from 1970 to 2000. The skewness observed in the distribution of citation counts at the article level is noteworthy. (9) Theory is the predominant category and, collectively, papers dealing with this field of research account for more citations than every other research field category. But when one analyzes the "central" papers across research fields, the papers in the theory category are the least frequently cited ones of all the categories (as measured by median or average total citations). Econometric method papers display interesting patterns: the standard deviation of total citations for this category is almost twice as great as the other categories' standard deviations. This goes hand in hand with the fact that the median number of citations is low for this category (slightly higher than the median for theoretical papers) but, at larger quantiles, econometric methods begin to outperform other categories; this suggests that there are heterogeneous levels of success in this field of research.
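The gap between mean and median under a skewed distribution, which drives the contrast just described, can be illustrated with toy numbers (invented for illustration, not drawn from the dataset):

```python
# Why skewed citation counts make the mean misleading: a handful of
# extremely successful papers pull the mean far above the median and
# inflate the standard deviation. Numbers are invented.
from statistics import mean, median, pstdev

econometric = [5, 8, 10, 12, 15, 20, 900]  # one runaway success
theory = [6, 9, 11, 13, 16, 19, 40]        # modest, compact distribution

# The econometric mean sits far above its median; dispersion dwarfs theory's.
print(mean(econometric), median(econometric))
print(pstdev(econometric), pstdev(theory))
```

This is exactly the pattern in Table 3: a low median for econometric method papers alongside a very large standard deviation and strong performance at high quantiles.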

III. RESULTS

A. Trends in Citation Patterns across Time and Fields of Research

We define "years since publication" as the difference between a given year and the year in which a paper was published. Our data allow us to quantify how many citations each paper received in each year since its publication; moreover, we can summarize citations across the years since publication for different fields of research. Figure 2 shows the number of cumulative citations received by the average paper in each field of research per year since publication. Given that our focus is to identify cycles in citation patterns as articles age, Figure 3, instead of showing stock values as in Figure 2, shows flow values by plotting the number of citations the average paper in each field of research received per year since publication (note that this last figure can be interpreted as the "derivative" of the curves in Figure 2 with respect to paper age). (10) For the purposes of this analysis, in both figures, we group publication dates into 5-year periods in order to reduce effects related to time-dependent factors (see Bornmann and Daniel 2008).
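The stock/flow relationship between Figures 2 and 3 amounts to taking first differences of the cumulative series; a minimal sketch with invented numbers:

```python
# Recovering yearly citation "flows" (Figure 3) from the cumulative
# "stock" series (Figure 2). Numbers are illustrative only.

cumulative = [0, 3, 9, 18, 25, 29, 31]  # total citations t years after publication

# The yearly flow is the first difference of the cumulative series,
# the discrete analogue of the "derivative" mentioned in the text.
flow = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]

print(flow)  # [0, 3, 6, 9, 7, 4, 2]: peaks a few years in, then declines
```

By construction, the flows sum back to the final cumulative total, so the two figures carry the same information in different forms.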

Figures 2 and 3 reveal some interesting patterns. First, as can be seen from the variation in the y-axis ranges across panels in each figure, citations for papers in 1995-1999 are drastically more numerous than they are for 1970-1974, and this increase holds steady across all 5-year periods. We believe that this phenomenon is multicausal. First, given that the number of publications in peer-reviewed journals grew at a steady rate throughout almost all of the 20th century (see Larsen and von Ins 2010), it is natural to observe more citations across time, as more citation sources translate directly into more citation counts. Second, there is evidence that newer articles tend to cite more sources than older ones (Neff and Olden 2010 refer to this phenomenon as "citation inflation"), which would also translate directly into higher citation counts for more recent periods. Third, Google Scholar has its own artifacts, as it sometimes indexes informal academic documents retrieved from the web (e.g., working papers and lecture notes). Since these documents, when available online, tend to be fairly new, this would also lead to higher citation counts for more recent years. (11) Adopting the concept of "citation inflation" proposed by Neff and Olden (2010), but applying it to a more general phenomenon, we will use the expression "citation inflation" to refer to the observed rise in citation counts over time, regardless of the cause. In practical terms, this means that a paper published in 1970 that received ten citations in the first year after its publication is clearly more successful than a paper published in the year 2000 that received exactly the same number of citations during its first year after publication. An accurate analysis of trends in citations should take this factor into account.
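One simple way to adjust for citation inflation, in the spirit of the deflation index of Aizenman and Kletzer (2011) discussed earlier, is to scale each calendar year's citations by the total volume of citations generated that year relative to a base year. The sketch below uses invented totals:

```python
# Hedged sketch of deflating citation counts for "citation inflation":
# scale each year's citations by total citations generated that year
# relative to a base year. The yearly totals here are invented.

yearly_totals = {1970: 10_000, 2000: 80_000}  # all citations generated each year
BASE_YEAR = 1970

def deflate(citations, year):
    """Express a citation count in base-year 'units'."""
    index = yearly_totals[year] / yearly_totals[BASE_YEAR]
    return citations / index

# Ten citations earned in 1970 are worth far more, in deflated terms,
# than ten citations earned in 2000.
print(deflate(10, 1970), deflate(10, 2000))
```

The regression analysis below handles the same problem differently, through calendar-year dummies rather than an explicit index.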

The second pattern that emerges from Figures 2 and 3 indicates that, while for the period 1970-1974, differences across fields of research do not seem to be very large (mainly because all articles tend to have low numbers of citations), as time has passed and the number of citations has risen, differences across fields have started to appear. An analysis of the trends in cumulative and mean citations shows that the 1980s was clearly the decade of successful econometric method papers, while the 1990s (especially the latter half) was the decade of successful applied and applied theory papers. In general, the other fields of research tend to outperform theoretical papers in citation counts. Our analysis uses a more sophisticated and technical approach to the identification of the life cycles of academic papers, but it must be said that Figure 3 has important implications for the calculation of journals' impact factors. Note that, if journal impact factors are obtained by averaging the number of citations of given papers during the first 2 or 5 years after their publication, then the distribution of fields of research within a journal may affect its impact factor; hence, for example, journals that publish applied theory papers could be expected to have higher impact factors than journals that publish theoretical or econometric method papers.

One concern with our previous analysis is that, as citations are highly skewed, using simple averages to analyze the trends in citations across fields of research may be misleading. For this reason, Figure 4 reproduces the information provided in Figure 3 but based on median citation counts instead.

As expected, median citations are lower than mean citations. Note that the fact that citations increase as time passes still holds, but patterns across research fields differ from those seen in our previous analysis. When median citations are used, the performance of econometric method papers declines in relative terms compared to other fields for all periods; indeed, for most periods, econometric method papers share an almost identical citation pattern with theoretical papers. The patterns for median citations indicate that the 1980s was not a peak decade for econometric method papers. In fact, it seems that the only 5-year period during which the citation pattern for econometric method papers differs from the one observed for theoretical papers was 1980-1984. The medians cast both applied and applied theory papers in a more favorable light. The 1990s are still the decade of successful applied and applied theory papers, and there also seems to be evidence that, since the year 2000, applied papers are being cited disproportionately.

B. The Life Cycle of Economic Research Articles

It is reasonable to assume that most published papers have a life cycle: they are first published, then they begin to become known and cited, reach a peak level of yearly citations, and then eventually their number of citations starts to fall (probably because they are eclipsed by newer papers). The objective of this section is to identify this life cycle for economic research articles across fields of research.

In previous sections, we have discussed two important features of citations. First, skewness in the distribution of citations can make the analysis of simple averages misleading. Second, because citations have become more frequent over time, citation inflation makes it hard to determine if the ascending curves observed in Figures 3 and 4 are ascending because a paper is effectively becoming more important or just because citations are becoming more common. (12) To address both issues, we identify the life cycle of research articles through quantile regression (QR) analysis. One advantage of this strategy is that QR not only allows us to examine the life cycle of "typical" or "central" papers but also to encompass papers having different levels of conditional "success." Another advantage is that, by using regression analysis, we are able to control, among other effects, for citation inflation. We propose the following regression model:

$$\mathrm{citations}_{i,t}(\tau) = \alpha(\tau) + \sum_{s \in S} \beta_s(\tau)\, I_{t,s} + \sum_{r \in RF} I_{i,r} \left( \alpha_r(\tau) + \sum_{s \in S} \beta_{r,s}(\tau)\, I_{t,s} \right) + \sum_{y \in Y} \gamma_y(\tau)\, I_{i,t,y} + X_i' \delta(\tau) + \varepsilon_{i,t}(\tau)$$

where citations_{i,t}(τ) stands for the citations received per year by paper i, conditional on the paper's characteristics, which depend on i (the paper itself) and t (the number of years since publication, which ranges from -2 to 20). τ stands for the quantile of the distribution of the error term ε. The elements of the set RF represent the applied, applied theory, and econometric method research fields (note that this specification leaves theory as our base category for fields of research). I_{i,r} is an indicator variable that takes the value 1 if paper i belongs to research field r and 0 otherwise. The set S contains the integers from -2 to -1 and from 1 to 20, which represent the number of years since publication (note that this specification leaves s = 0 as our base category for years since publication). I_{t,s} is an indicator variable that takes the value 1 if s equals t and 0 otherwise. (13) The set Y contains the integers from 1971 to 2014, which represent the calendar year in which citations were received (note that this specification leaves y = 1970 as our base category). I_{i,t,y} is another indicator variable that takes the value 1 if year y is the year in which the citations of paper i, t years after publication, were generated and 0 otherwise; this makes it possible to control for secular trends in citations (including citation inflation). (14) X_i is a vector of the characteristics of paper i that includes journal-of-publication dummies. (15) Finally, ε_{i,t}(τ) is an error term. The life cycle of papers from category r can be identified by looking at the trend of the values obtained for α(τ) + β_t(τ) + I_{i,r}·(α_r(τ) + β_{r,t}(τ)) for different values of t (in the case of theory, our base category, it is sufficient to look at the trend of α(τ) + β_t(τ)).
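To make the identification concrete, the sketch below reconstructs a life-cycle curve from a set of estimated coefficients, following the expression α(τ) + β_t(τ) + I_{i,r}·(α_r(τ) + β_{r,t}(τ)) for a single field. All coefficient values are made up for illustration; they are not the paper's estimates:

```python
# Reading a field's life-cycle curve off the regression coefficients.
# Coefficient values below are invented, purely to illustrate the formula.

alpha = 1.0  # intercept: base category (theory) at t = 0

# beta_t: age effects that rise to a peak at t = 5, then decline
beta_t = {t: 0.5 * t if t <= 5 else 2.5 - 0.3 * (t - 5) for t in range(1, 21)}
beta_t[0] = 0.0  # base category for years since publication

alpha_applied = 0.8                                  # level shift for applied papers
beta_applied_t = {t: 0.1 * t for t in range(0, 21)}  # field-specific age effects

def life_cycle(t, field="theory"):
    """Fitted citations at age t, holding calendar year and journal fixed."""
    value = alpha + beta_t[t]
    if field == "applied":
        value += alpha_applied + beta_applied_t[t]
    return value

theory_curve = [life_cycle(t) for t in range(0, 21)]
applied_curve = [life_cycle(t, "applied") for t in range(0, 21)]
# In this toy example, the applied curve lies above theory at every age,
# and the theory curve peaks 5 years after publication.
```

The same arithmetic, applied quantile by quantile to the actual estimates (and smoothed, as described below), produces the curves plotted in Figure 5.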

Figure 5 shows the life cycle of economics papers across fields of research. (16) In all sub-figures, we plot 5-year centered moving averages of the results obtained for α(τ) + β_t(τ) + I_{i,r}·(α_r(τ) + β_{r,t}(τ)) for each field of research and for years since publication (t) between -2 and 20. (17) In order to facilitate comparisons with previous results, the first sub-figure plots the results arrived at by using the ordinary least squares (OLS) estimator; the remaining sub-figures plot the results obtained for different values of τ. Table 4 details the values obtained for α(τ) and all β_t(τ), α_r(τ), and β_{r,t}(τ) in regressions for τ equal to 0.50 and 0.85, along with their estimated bootstrapped standard errors. (18)
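A 5-year centered moving average of the kind used to smooth these curves can be sketched as follows. The endpoint handling here is an assumption (a symmetrically shrinking window); the paper's exact treatment at the boundaries may differ:

```python
# Minimal 5-year centered moving average for smoothing a coefficient series.
# Endpoint behavior (shrinking window) is an illustrative choice.

def centered_ma(series, window=5):
    """Centered moving average; the window shrinks symmetrically at the ends."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

smoothed = centered_ma([1, 2, 3, 4, 5, 6, 7])
print(smoothed)  # [2.0, 2.5, 3.0, 4.0, 5.0, 5.5, 6.0]
```

Smoothing of this kind removes year-to-year noise in the estimated coefficients while preserving the location of the peak.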

When comparing the patterns observed in Figures 3 and 4 with the results shown in Figure 5, the decline in citations obtained from the QR analysis with τ equal to 0.50 versus OLS (both of which are estimates of a central tendency of the distribution of the conditional outcome variable) is less striking than the one observed when comparing Figure 4 to Figure 3. Additionally, even after all the controls are added, the influence of exceptionally successful econometric method papers remains when analyzing conditional means using OLS estimates; this is reflected in the fact that the estimated curve for econometric method papers under OLS analysis never descends. (19)

One feature that emerges from an examination of Figure 5 is that economic research articles effectively have a life cycle. Almost every estimated curve follows the same shape: papers begin their life with a low number of citations per year; that number then rises over a given period of time until it reaches a peak; thereafter, papers decline in importance as measured by yearly citation counts. Median papers reach their peak between around 3 and 5 years after their publication. Ten years after being published, the median paper in every field of research receives negligible levels of citations per year. This result is in keeping with Chiappori and Levitt's (2003) findings, in which they state that theoretical papers cited by empirical microeconomic research papers tend to be less than a decade old; however, our results indicate that, regardless of the field of research under analysis, most papers will receive citations mainly during their first decade of existence and almost none thereafter. (20) The case of τ = 0.85 differs, although the peak for these successful papers seems to be found around the same time or slightly later. Here a first difference in citation patterns across fields is observed: only theoretical and econometric method papers receive low numbers of citations 10 years after their publication; applied and applied theory papers receive significant levels of citations for more than 15 years following their publication.

Focusing on the differences across fields of research, it can be seen that theoretical papers are, in general, cited the least often (a feature also exhibited in Figures 3 and 4), although the performance of econometric method papers in this respect is almost identical to that of theoretical papers. Differences between these two fields are observed for [tau] = 0.85, but they are not as striking as the ones observed when using OLS. One interesting feature of econometric method papers for [tau] = 0.85 is that these papers age relatively well: from 15 years after publication onward, their citation levels behave almost the same way as those of applied papers, even though applied papers reach a much higher citation peak. An almost non-descending curve for econometric method papers is observed for [tau] = 0.95; this suggests that econometric method papers' life cycles are very heterogeneous across quantiles: most papers have a modest life cycle, but the most successful ones (the top 5% in terms of annual citations) are exceptional not only within their own field of research but also in relation to economics papers as a whole. In other words, the top 5% of econometric method papers receive as many citations per year as, or more than, the top 5% of papers in all the other categories. This last result is in line with Stigler's (1994) findings, which indicate that a small investment in statistical theory can reap much greater rewards in terms of the number of citations.

Applied and applied theory papers are the clear winners in terms of citation counts: (1) during their first years of life following publication, they receive more citations than papers in the other categories; and (2) they reach a higher peak (more than twice as high as the peak for theoretical papers), and that peak level seems to last longer. Both patterns are observed across different values of [tau] (except for extremely high values, such as 0.95, where econometric method papers excel). We observe no remarkable difference between these two categories for [tau] = 0.50. Differences do appear at higher values of [tau], where applied theory papers seem to perform better many years after publication (meaning that they have a longer life cycle), while the two perform almost identically when the papers are new (remember that these are the years that matter for journal impact factor calculations). It should be noted that these results go hand in hand with the exceptional performance of the QJE during the 1990s documented by Card and DellaVigna (2013); it is plausible that this performance is attributable to the fact that, during that period, the QJE shifted its focus from mainly theoretical articles to applied papers (see Figure 1), a shift that, as our results suggest, translates directly into a higher number of citations. (21)

Finally, in order to provide a clearer picture of the implications of our results, Figure 6 shows an aggregate overview of the estimated coefficients plotted in Figure 5. It should be noted that these curves can be interpreted as trends in the total number of citations received by articles belonging to different fields of research as time passes. (22) As can be seen, the differences across fields of research are dramatic. The mean or median theory paper consistently garners between one-third and one-fourth as many citations as the typical applied paper. Once again, econometric method papers do not do well when conditional median curves are analyzed, but at higher-order conditional quantiles their performance converges with that of applied and applied theory papers. An interesting pattern which was not evident from the previous figures is that extremely successful theory papers, although still considerably less successful than equivalently successful articles in other fields, fare comparatively better than the median or mean theory paper does relative to the median or mean paper from the applied or applied theory fields. This is reflected in the fact that our estimated life cycle for theory articles at [tau] = 0.95 shows them garnering between one-half and two-thirds of the citations received by the rest of the categories.

IV. DISCUSSION AND CONCLUSION

Economic research is a heterogeneous discipline. It covers a wide range of topics and uses diverse research and analytical methodologies. In this paper, we add and quantify another dimension of heterogeneity, since we find that the extent to which economic research articles are cited varies enormously across different fields of economic research. By constructing and analyzing a large dataset that combines information on all articles published in the top five journals with their yearly Google Scholar citations, we are able to identify the life cycle of economic research articles across different fields of research.

Even though citation counts are an extremely valuable tool for measuring academic articles' importance, we believe that the patterns observed for the life cycles of papers across fields of research support the "field-dependent factors" critique in economics. We find that citation patterns are much more favorable for applied and applied theory papers than for theoretical papers: (1) they have a longer life cycle, (2) they receive more citations per year and have a higher peak level, and (3) their performance during their first years after publication surpasses that of theoretical and econometric method papers. In the case of the median papers, we observe no sharp difference between applied and applied theory papers. In the case of frequently cited papers, however, applied theory papers tend to outperform applied papers once 5 years have passed since their publication, and they thus have a longer life span. Econometric method papers are a special case. The citation patterns for most of these papers are similar to the patterns for theoretical papers, that is, they have short life cycles and their high points are lower than those of applied and applied theory papers, but the most successful 5% of econometric method papers are much more successful than the top 5% of papers in any other field of research. This exceptional performance does not seem to be associated with high peak levels, but instead with the fact that they "age well," that is, after their peak, the descending slope of their life cycle is very gradual.

Our evidence seems to provide a basis for a caveat regarding the use of citation counts as a "one-size-fits-all" yardstick for measuring research outcomes in economics, as the incentives generated by their use can be detrimental to fields of research which effectively generate valuable (but perhaps more specialized) knowledge, not only in economics but in other disciplines as well. Our results suggest that field-dependent factors should be taken into account not only when comparing across disciplines but also when comparing across fields of research within a given discipline, at least in economics. According to our findings, pure theoretical economic research is the clear loser in terms of citation counts. Therefore, if specialized journals' impact factors are calculated solely on the basis of citations during the first years after an article's publication, then theoretical research will clearly not be attractive to departments, universities, or journals that are trying to improve their rankings, or to researchers who use their citation records when applying for better university positions or for grants. The opposite is true for applied papers and applied theory papers: these fields of research are the outright winners when citation counts are used as a measurement of articles' importance, and their citation patterns over time are highly attractive for all concerned. Econometric method papers are a special case; their citation patterns vary a great deal across different levels of "success." But, since publishing an extremely successful paper is by no means an easy task, econometric research may also lose out if citation counts are relied on for purposes of assessment. (23)

An analysis of the reasons for these patterns is beyond the scope of this paper, but a number of hypotheses can be developed. One intriguing fact that might shed some light on the subject is that, as can be seen from Table 3, theory is the most frequent field of research in our dataset; hence, if there are more theoretical papers being circulated, and we assume a certain degree of homophily in the citation network (papers from research field r having a tendency to cite more papers from research field r), then we would expect to observe, at the least, a non-negligible number of extremely successful theoretical papers. Yet our analysis shows that even the top 5% of theoretical papers do not achieve an exceptional level of performance compared to other categories. One highly plausible hypothesis that may explain what is happening is that applied papers, applied theory papers, and extremely successful econometric method papers are much more likely to transcend the frontiers of their own fields of research and even of economics as a whole. In other words, compelling facts or findings described in applied or applied theory papers may be studied in other branches of the social sciences and even in more distant disciplines such as psychophysics. Methodologies developed in econometric method papers may be applied across different disciplines, or even studied in fields not related directly to the social sciences at all, such as mathematics and mathematical statistics. Quantifying this phenomenon falls outside the scope of this paper, but a preliminary effort to determine which kinds of papers are the successful ones in each field of research seems to point in this direction.

doi: 10.1111/ecin.12292

Online Early publication November 4, 2015

ABBREVIATIONS

AER: American Economic Review

ECA: Econometrica

JEL: Journal of Economic Literature

JPE: Journal of Political Economy

OLS: Ordinary Least Squares

QJE: Quarterly Journal of Economics

QR: Quantile Regression

RES: Review of Economic Studies

REFERENCES

Abt, H. A. "How Long Are Astronomical Papers Remembered?" Publications of the Astronomical Society of the Pacific, 108, 1996, 1059-61.

Aizenman, J., and K. Kletzer. "The Life Cycle of Scholars and Papers in Economics--The 'Citation Death Tax'." Applied Economics, 43(27), 2011, 4135-48.

Bayer, A. E., and J. Folger. "Some Correlates of a Citation Measure of Productivity in Science." Sociology of Education, 39, 1966, 381-90.

Bjork, S., A. Offer, and G. Soderberg. "Time Series Citation Data: The Nobel Prize in Economics." Scientometrics, 98(1), 2014, 185-96.

Bornmann, L., and H. D. Daniel. "What Do Citation Counts Measure? A Review of Studies on Citing Behavior." Journal of Documentation, 64(1), 2008, 45-80.

Card, D. E., and S. DellaVigna. "Nine Facts about Top Journals in Economics." Journal of Economic Literature, 51(1), 2013, 144-61.

Chiappori, P. A., and S. D. Levitt. "An Examination of the Influence of Theory and Individual Theorists on Empirical Research in Microeconomics." American Economic Review, 93, 2003, 151-55.

Cole, S., and J. R. Cole. "Scientific Output and Recognition--Study in Operation of Reward System in Science." American Sociological Review, 32, 1967, 377-90.

Coupé, T. "Revealed Performances: Worldwide Rankings of Economists and Economics Departments, 1990-2000." Journal of the European Economic Association, 1(6), 2003, 1309-45.

De Winter, J. C., A. A. Zadpoor, and D. Dodou. "The Expansion of Google Scholar versus Web of Science: A Longitudinal Study." Scientometrics, 98(2), 2014, 1547-65.

Einav, L., and J. Levin. "The Data Revolution and Economic Analysis." Innovation Policy and the Economy, 14(1), 2014, 1-24.

Ellison, G. "How Does the Market Use Citation Data? The Hirsch Index in Economics." American Economic Journal: Applied Economics, 5(3), 2013, 63-90.

Garfield, E. "Citation Analysis as a Tool in Journal Evaluation." American Association for the Advancement of Science, 178, 1972, 471-79.

--. "What Is the Primordial Reference for the Phrase 'Publish or Perish'?" The Scientist, 10(12), 1996, 10-11.

Gibson, J., D. L. Anderson, and J. Tressler. "Which Journal Rankings Best Explain Academic Salaries? Evidence from The University of California." Economic Inquiry, 52, 2014, 1322-40.

Giles, C. L., K. D. Bollacker, and S. Lawrence. "CiteSeer: An Automatic Citation Indexing System," Proceedings of the Third ACM Conference on Digital Libraries, ACM, Pittsburgh, PA, June 23-26, 1998, 89-98.

Giles, J. "Science in the Web Age: Start Your Engines." Nature, 438(7068), 2005, 554-5.

Gross, P. L. K., and E. M. Gross. "College Libraries and Chemical Education." Science, LXVI(1713), 1927, 385-9.

Hamermesh, D. S. "Six Decades of Top Economics Publishing: Who and How?" Journal of Economic Literature, 51(1), 2013, 162-72.

Hamermesh, D. S., and G. A. Pfann. "Reputation and Earnings: The Roles of Quality and Quantity in Academe." Economic Inquiry, 50(1), 2012, 1-16.

Hamermesh, D. S., G. E. Johnson, and B. A. Weisbrod. "Scholarship, Citations and Salaries: Economic Rewards in Economics." Southern Economic Journal, 49, 1982, 472-81.

Harzing, A. W. "A Longitudinal Study of Google Scholar Coverage between 2012 and 2013." Scientometrics, 98(1), 2014, 565-75.

Hazelkorn, E. Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence. New York: Palgrave Macmillan, 2011.

Hilmer, C. E., M. J. Hilmer, and M. R. Ransom. "Fame and the Fortune of Academic Economists: How the Market Rewards Influential Research in Economics." Discussion Paper Series No. 6960, Forschungsinstitut zur Zukunft der Arbeit, 2012.

Hirsch, J. E. "An Index to Quantify an Individual's Scientific Research Output." Proceedings of the National Academy of Sciences, 102(46), 2005, 16569-72.

King, D. A. "The Scientific Impact of Nations." Nature, 430(6997), 2004, 311-16.

Koenker, R. "Quantreg: Quantile Regression." R Package Version 5.05, 2013. Accessed October 15, 2014. http://CRAN.R-project.org/package=quantreg.

Larsen, P. O., and M. von Ins. "The Rate of Growth in Scientific Publication and the Decline in Coverage Provided by Science Citation Index." Scientometrics, 84(3), 2010, 575-603.

Meho, L. I. "The Rise and Rise of Citation Analysis." Physics World, 20(1), 2007, 32-36.

Narin, F. Evaluative Bibliometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity. Cherry Hill, NJ: Computer Horizons, 1976.

Neff, B. D., and J. D. Olden. "Not So Fast: Inflation in Impact Factors Contributes to Apparent Improvements in Journal Quality." BioScience, 60(6), 2010, 455-59.

Oppenheim, C. "The Correlation between Citation Counts and the 1992 Research Assessment Exercise Ratings for British Library and Information Science University Departments." Journal of Documentation, 51(1), 1995, 18-27.

Peters, H. P. F., and A. F. Van Raan. "On Determinants of Citation Scores: A Case Study in Chemical Engineering." Journal of the American Society for Information Science, 45(1), 1994, 39-49.

Portnoy, S., and R. Koenker. "The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators." Statistical Science, 2(4), 1997, 279-300.

Redner, S. "How Popular Is Your Paper? An Empirical Study of the Citation Distribution." The European Physical Journal B-Condensed Matter and Complex Systems, 4(2), 1998, 131-34.

Ruane, F., and R. Tol. "Rational (Successive) H-Indices: An Application to Economics in the Republic of Ireland." Scientometrics, 75(2), 2008, 395-405.

Seglen, P. O. "The Skewness of Science." Journal of the American Society for Information Science, 43(9), 1992, 628-38.

Smith, A., and M. Eysenck. The Correlation between RAE Ratings and Citation Counts in Psychology. London, UK: Department of Psychology, Royal Holloway, University of London, 2002.

Smith, L. C. "Citation Analysis." Library Trends, 30, 1981, 83-106.

Stern, D. I. "Uncertainty Measures for Economics Journal Impact Factors." Journal of Economic Literature, 51(1), 2013, 173-89.

Stigler, G. J., S. M. Stigler, and C. Friedland. "The Journals of Economics." Journal of Political Economy, 103(2), 1995, 331-59.

Stigler, S. M. "Citation Patterns in the Journals of Statistics and Probability." Statistical Science, 1, 1994, 94-108.

Tijssen, R. J. W., T. N. Van Leeuwen, and A. F. J. Van Raan. Mapping the Scientific Performance of German Medical Research. An International Comparative Bibliometric Study. Stuttgart, Germany: Schattauer Verlag, 2002.

Vucovich, L. A., J. B. Baker, and J. T. Smith Jr. "Analyzing the Impact of an Author's Publications." Journal of the Medical Library Association, 96(1), 2008, 63-66.

Zimmermann, C. "Academic Rankings with RePEc." Econometrics, 1(3), 2013, 249-80.

SUPPORTING INFORMATION

Additional Supporting Information can be found in the online version of this article:

Appendix S1. Supporting online material

Anauati: Ph.D. Student, Departamento de Economia, Universidad de San Andres, Buenos Aires, Argentina. Phone 54-11-4725-7041, Fax 54-11-47257010, E-mail victoria.anauati@gmail.com

Galiani: Professor, Department of Economics, University of Maryland, College Park, MD 20742; NBER Cambridge, MA. Phone 301-405-3518, Fax 301-405-3518, E-mail galiani@econ.umd.edu

Galvez: Ph.D. Student, Departamento de Computacion, FCEyN, Universidad de Buenos Aires, Buenos Aires, Argentina. Phone 54-11-45763390, Fax 54-11-45763359, E-mail ramirogalvez@gmail.com

(1.) In this paper, the authors argue that the analysis of references in a single volume of the Journal of the American Chemical Society could be used to guide the book and journal purchases of a college library in order to better prepare students for advanced work and to act as a stimulus for the academic writings of the faculty.

(2.) Garfield (1996) states that the phrase "publish or perish" presumably dates back to the first half of the 20th century.

(3.) For example, taking into account the fact that fewer articles tend to be published in specialized fields than in general ones, it is argued that, as the probability of being cited is presumed to be positively related to the number of publications in a field, papers dealing with small or specialized fields receive fewer citations by virtue of that very fact (see Bornmann and Daniel 2008, 46).

(4.) For any given year, the n-year impact factor of a journal is defined as the average number of citations received per paper published in that journal during the n preceding years. A journal's impact factor is widely regarded as a useful ranking of journal quality and is used extensively by leading journals in their advertising. In addition, impact factors are considered to be the universal yardstick by which journals are judged; as such, they are often used to evaluate individual scientists or research groups as well (Neff and Olden 2010; Stern 2013).
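The definition in this footnote can be sketched numerically. The function name and all figures below are invented for illustration; they follow only the definition given above (citations received this year to papers from the n preceding years, divided by the number of those papers):

```python
# Hedged sketch of the n-year impact factor from the footnote's definition.
# All numbers are hypothetical.

def impact_factor(citations_to_recent, papers_published, n):
    """citations_to_recent: citations received this year to papers published
    in the n preceding years; papers_published: paper counts for those years,
    most recent first."""
    total_papers = sum(papers_published[:n])
    return citations_to_recent / total_papers

# e.g., 300 citations this year to articles from the two preceding years,
# which together contained 75 + 75 = 150 papers:
print(impact_factor(300, [75, 75], 2))  # -> 2.0
```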

(5.) It could even be argued that a researcher requires less training in order to be able to migrate from one topic to another while using the same research methodology than a researcher needs to study the same topic using different research methodologies.

(6.) It might seem that JEL classification codes are sufficient and that our system provides no added value in terms of understanding citation patterns in economics. We believe this is not the case. In order to clarify why, an example may be helpful. Let us take two contemporaneous and successful papers in our sample that are similar in terms of JEL codes but come from different fields of economic research: AER's "Rents, competition, and corruption," by Alberto Ades and Rafael Di Tella, and RES's "A theory of collective reputations (with applications to the persistence of corruption and to firm quality)," by Jean Tirole. They both share the "L" JEL code (which stands for industrial organization), were published in the second half of the 1990s, deal with topics related to corruption and, based on their Google Scholar citations, could be regarded as extremely successful papers. The former is an applied paper which contributes to a better understanding of a specific phenomenon, and it does so mainly by relying on empirical data analysis; by data collection time it had received 1,468 Google Scholar citations. The latter is a pure theory paper which relies heavily on the use of assumptions and propositions as means for developing a complex model designed to provide new insights about the expected behavior of agents; by data collection time, it had received 931 Google Scholar citations. Even though the two papers have a great deal in common in terms of JEL codes and the topics that they cover, they differ markedly in palpable ways in terms of both methodology and structure. We believe our classification captures these features better than one based entirely on JEL codes, as it reflects additional information concerning potential audiences and possible sources of citations.

(7.) Hirsch's h index is closely related to citation counts; it is defined as the largest number h such that the researcher has at least h papers with h or more citations.
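The h index defined in this footnote is straightforward to compute. A minimal sketch, using invented citation counts:

```python
# The h index per the footnote: the largest h such that the researcher has
# at least h papers with h or more citations each. Counts below are invented.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:  # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```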

(8.) See De Winter, Zadpoor, and Dodou (2014).

(9.) Skewness in distribution of citation counts has been reported in Card and DellaVigna (2013), Redner (1998), and Seglen (1992), among others.

(10.) Formally, let [citations.sub.i,t] be the number of citations paper i received t years after publication, let r denote the set of papers from a particular field of research (for each 5-year period), and let [n.sub.r] be the number of papers in field of research r (for each 5-year period). For papers from each 5-year period and each field of research, Figure 2 shows the evolution of (1/[n.sub.r]) [summation over (l [less than or equal to] t)] [summation over (i [member of] r)] [citations.sub.i,l] for successive values of t, while Figure 3 shows the evolution of (1/[n.sub.r]) [summation over (i [member of] r)] [citations.sub.i,t] for successive values of t. We start our analysis 2 years before publication, as Google Scholar associates citations of working paper versions with their published versions.
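The two quantities in this footnote — the per-year mean of citations across papers in a field (Figure 3's series) and its running cumulative sum (Figure 2's series) — can be sketched as follows. The data and function names are hypothetical, introduced only to make the formulas concrete:

```python
# Numerical sketch of the footnote's two series for a field r with n_r papers.
# citation_matrix[i][t] = citations of paper i, t years after publication.
# The matrix below is invented for illustration.

def yearly_mean(citation_matrix):
    """Figure-3-style series: mean citations per paper at each year t."""
    n_r = len(citation_matrix)
    horizon = len(citation_matrix[0])
    return [sum(paper[t] for paper in citation_matrix) / n_r
            for t in range(horizon)]

def cumulative_mean(citation_matrix):
    """Figure-2-style series: running sum of the yearly means."""
    out, running = [], 0.0
    for m in yearly_mean(citation_matrix):
        running += m
        out.append(running)
    return out

field = [[1, 3, 5, 2], [0, 2, 4, 2]]  # two hypothetical papers, 4 years each
print(yearly_mean(field))      # -> [0.5, 2.5, 4.5, 2.0]
print(cumulative_mean(field))  # -> [0.5, 3.0, 7.5, 9.5]
```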

(11.) It should be noted that the fact that Google Scholar acquires data by crawling the web could explain the drop observed in Figure 3 in the number of citations for the period 1995-1999 for distant years since publication. Those years are close to data collection time, and as documents just released (and their citations) will take time to be found and indexed, citations will take time to be counted. One would expect this phenomenon to be more marked in the case of informal documents.

(12.) It should be noted that Aizenman and Kletzer (2011) tackle this situation by deflating yearly citations by an index which only takes into account the rise over time in the total number of publications. We believe that the approach used in this paper is more general, as it captures citation inflation regardless of its cause. This should not be taken as a critique of Aizenman and Kletzer's (2011) methodology: we are able to control for this effect using extra coefficients in our regression analysis mainly because of the large size of our sample of papers, which was not the case for their study (nor was it their objective).

(13.) For example, suppose we are analyzing an article 10 years after its publication (t is equal to 10). In this case, [I.sub.t,s] equals 1 if and only if s equals 10, thereby neutralizing the effect of any coefficient [[beta].sub.s]([tau]) or [[beta].sub.r,s]([tau]) other than [[beta].sub.10]([tau]) and [[beta].sub.r,10]([tau]).

(14.) For example, suppose we are analyzing an article i published in 1990, 10 years after its publication (t is equal to 10). In this case, [I.sub.i,t,y] equals 1 if and only if y is equal to 2000, thereby neutralizing the effect of any [[gamma].sub.y]([tau]) other than [[gamma].sub.2000]([tau]). [[gamma].sub.2000]([tau]) captures the extra citations received by paper i, t years after publication, because those citations were generated in the year 2000, relative to citations generated in 1970 (the base category).

(15.) As some fields of research are correlated with JEL fields (a fact that can be seen in Table 1), we checked the robustness of our results by adding JEL field dummies to [X.sub.i] in order to control for possible confounding factors. All results remain qualitatively unchanged, with the exception that the econometric method papers at [tau] = 0.95 show an even more outstanding performance in terms of yearly citations (note that this further emphasizes our finding of strong heterogeneity in this field of research). These results are provided in the Supporting Information to this paper.

(16.) Results from Figure 5 are based on papers published in the top five journals between 1970 and 2000. This means that, for the set of papers published after 1994, no citation data are yet available for large values of t. We checked the robustness of our results by examining only papers published between 1970 and 1994, and no differences in their qualitative features were found.

(17.) It should be noted that we are plotting a subset of all the estimated coefficients in our model, that is, the ones which reflect how citations of an article evolve as time goes by. However, other coefficients which shift the estimated curves up or down are present in our model and are not represented in these sub-figures. For that reason, seeing negative numbers for the plotted curves at dates long after a given publication date does not necessarily mean that our full model predicts negative values for all articles. Additionally, as the number of citations that articles receive 20 years after their publication varies much more than the number of citations for years close to the publication date does, estimates for more distant dates should be viewed with greater caution, because they may be less precise.

(18.) Given the large size of the resulting design matrix (217,349 rows and 142 columns), we follow Portnoy and Koenker (1997) and fit our models using the Frisch-Newton interior point method as implemented in Koenker (2013).
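The paper's models are fit with the Frisch-Newton interior-point method in R's quantreg; reproducing that solver is beyond a footnote, but the principle underlying quantile regression can be illustrated in a few lines. The following is a minimal, dependency-free sketch (not the paper's code): minimizing the "check" loss [rho].sub.[tau]](u) = u([tau] - 1{u < 0}) over a constant recovers the [tau]-th sample quantile, which is exactly the criterion QR generalizes to regression coefficients. The function names and data are invented:

```python
# Illustration only: the check-loss criterion behind quantile regression.
# Minimizing total check loss over a constant yields a tau-th sample quantile;
# quantreg's Frisch-Newton method solves the same criterion with covariates.

def check_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0}): asymmetric absolute loss."""
    return u * (tau if u >= 0 else tau - 1)

def quantile_by_check_loss(y, tau):
    """Return the observation in y minimizing total check loss (a tau-quantile).
    Restricting candidates to the sample points is enough, since the minimizer
    of a sum of check losses always includes an observed value."""
    return min(y, key=lambda q: sum(check_loss(yi - q, tau) for yi in y))

y = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(quantile_by_check_loss(y, 0.50))  # -> 5, a median of the sample
print(quantile_by_check_loss(y, 0.85))  # -> 9, the 0.85-quantile
```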

(19.) This pattern is not surprising in the light of our previous findings, which show that the distribution of citations for econometric method papers is particularly skewed to the right. If anything, the "odd" behavior observed in the estimated coefficients further supports our strategy of analyzing citation patterns using quantile regression analyses. Furthermore, it should be re-emphasized that the whole purpose of this sub-figure is to contribute to a better understanding of this strange behavior and to compare it with our previous findings. Our main results and the identification of the life cycle of scholarly articles are derived entirely from quantile regression analyses.

(20.) Of course, we cannot rule out the possibility that citation inflation may lead to an increase in the number of citations of a given paper, but it is quite likely that these citations will not be due to the paper's quality, but rather simply to changing patterns in citations in general.

(21.) Furthermore, given that JEL fields do not necessarily translate into a specific field of research (see Table 1), this could also explain why the inclusion of JEL field dummies does not impact on the journal x cohort effects estimated by these authors.

(22.) Table 3 and Figure 6 are strongly related, as they both show accumulated values for citations. However, two points should be made. First, Table 3 shows the cumulative values of citations for all the years since publication (i.e., for an article published in 1970, the data provide cumulative measurements for citations over nearly half a century), while Figure 6 only shows cumulative measurements for citations for up to 15 years since the publication date. Second, Figure 6 shows a citations value that has been "deflated," while Table 3 just shows a cumulative citations value that does not control for citation inflation. Taking this into account, it is not surprising to see that the Table 3 values are considerably higher than the Figure 6 estimations.

(23.) Theory papers also seem to do comparatively better when exceptionally successful papers are analyzed, but this improvement is milder than it is for econometric method papers. Additionally, as theory is by far the most popular field of research in terms of the number of articles, writing an extremely successful theory paper could be even harder to accomplish than in other fields of research because there is more competition.

TABLE 1
Distribution of Articles in Different Fields of Research across JEL Fields

                              Applied    Applied    Econometric
JEL Field                     Papers, %  Theory, %  Methods, %   Theory, %
Microeconomics                16.0        8.9        2.3         72.9
Theory                         6.7        4.4        0.8         88.1
Macroeconomics                31.9       15.8        4.3         48.0
Labor                         47.3       15.2        2.3         35.2
Econometrics                   2.8        2.5       83.1         11.5
Industrial organization       39.8       15.7        1.0         43.5
International                 18.9       11.3        1.4         68.4
Finance                       34.4       13.5        2.4         49.6
Public economics              27.0       13.7        0.9         58.4
Health and urban economics    46.0       11.9        2.1         40.0
Development                   36.5       15.8        0.9         46.8
History                       78.2       10.7        0.0         11.2
Lab-based experiments         88.4        4.7        0.0          7.0
Other                         28.9       11.6        1.6         58.0

TABLE 2
Distribution of Articles across Different Fields of Research and Journals

Field of Research         AER    ECA    JPE    QJE    RES    All Journals
Applied                   892    160    810    467    101    2,430
Applied theory            329    142    277    135    113      996
Econometric methods        46    770     14     11    133      974
Theory                  1,252  1,217    919    804  1,080    5,272
All fields of research  2,519  2,289  2,020  1,417  1,427    9,672

TABLE 3
Summary Statistics of Citation Data at the Article Level across Journals and Fields of Research

                     Median  Quantile  Quantile  Mean     Standard   Most    Total      Articles
                             0.75      0.95               Deviation  Cited   Citations
Journal
AER                  108     301       1,241     306.01     683.54   12,906    770,841  2,519
ECA                   80     228       1,230     334.29   1,268.65   29,636    765,188  2,289
JPE                  106     300       1,429     368.66   1,133.22   25,046    744,702  2,020
QJE                   93     333       1,346     357.20     970.44   19,389    506,155  1,417
RES                   60     163         658     183.15     543.01   14,551    261,349  1,427
Research field
Applied              139     354       1,356     360.18     910.20   25,046    875,227  2,430
Applied theory       134     361       1,455     376.01     874.06   17,267    374,508    996
Econometric methods   77     232       1,720     423.60   1,618.43   23,110    412,587    974
Theory                67     206       1,129     262.88     858.45   29,636  1,385,913  5,272
All                   89     263       1,242     315.16     977.46   29,636  3,048,235  9,672

TABLE 4
Quantile Regression Results Obtained for [tau] = 0.50 and [tau] = 0.85

[tau] = 0.50
            Base Category:      Interactions
            Theory              Applied Papers     Applied Theory     Econ. Methods
Constant    0.00 (0.00)         1.00 ** (2.07)     0.00 (0.00)        0.00 (0.00)
t = -2      0.00 (0.00)         -1.00 ** (2.07)    0.00 (0.00)        0.00 (0.00)
t = -1      0.00 (0.00)         -1.00 ** (2.07)    0.00 (0.00)        0.00 (0.00)
t = 1       1.00 *** (20.10)    0.00 (0.00)        1.00 ** (2.14)     0.00 (0.00)
t = 2       2.00 *** (4.17)     0.00 (0.00)        1.00 (1.37)        0.00 (0.00)
t = 3       2.00 *** (17.19)    0.00 (0.00)        2.00 *** (3.42)    0.00 (0.00)
t = 4       2.00 *** (22.28)    1.00 * (1.80)      2.00 *** (3.36)    0.00 (0.00)
t = 5       2.00 *** (4.29)     1.00 (1.26)        2.00 *** (2.75)    0.00 (0.00)
t = 6       1.00 *** (3.04)     2.00 *** (3.05)    3.00 *** (4.42)    1.00 *** (3.01)
t = 7       1.00 *** (25.13)    1.66 *** (2.66)    2.00 *** (3.10)    1.00 *** (4.24)
t = 8       1.00 *** (20.10)    1.00 * (1.97)      2.00 *** (3.55)    1.00 ** (2.04)
t = 9       1.00 *** (19.26)    1.00 ** (2.09)     2.00 *** (3.66)    0.00 (0.00)
t = 10      1.00 *** (12.10)    1.00 * (1.84)      2.00 *** (3.09)    0.00 (0.00)
t = 11      1.00 ** (2.52)      0.00 (0.00)        1.00 (1.43)        0.00 (0.00)
t = 12      0.00 (0.00)         1.00 * (1.75)      2.00 *** (3.16)    1.00 ** (2.09)
t = 13      0.00 (0.00)         1.00 ** (2.02)     1.00 (1.52)        0.00 (0.00)
t = 14      0.00 (0.00)         1.00 (1.59)        1.00 * (1.87)      0.00 (0.00)
t = 15      0.00 (0.00)         0.00 (0.00)        1.00 * (1.97)      0.00 (0.00)
t = 16      0.00 (0.00)         0.00 (0.00)        1.00 (1.35)        0.00 (0.00)
t = 17      -1.00 *** (3.95)    1.00 (1.56)        1.00 * (1.68)      1.00 ** (2.05)
t = 18      -1.00 *** (20.10)   0.00 (0.00)        1.00 * (1.96)      0.00 (0.00)
t = 19      -1.00 *** (16.35)   0.00 (0.00)        0.00 (0.00)        0.00 (0.00)
t = 20      -1.00 *** (9.52)    -1.00 * (1.96)     0.00 (0.00)        0.00 (0.00)

[tau] = 0.85
            Base Category:      Interactions
            Theory              Applied Papers     Applied Theory     Econ. Methods
Constant    2.00 *** (17.81)    0.00 (0.00)        0.00 (0.00)        0.25 (1.26)
t = -2      -1.00 *** (8.67)    -1.00 *** (4.83)   -0.50 * (1.78)     -0.50 * (1.77)
t = -1      -1.00 *** (11.20)   -0.50 ** (2.38)    0.00 (0.00)        0.00 (0.00)
t = 1       1.50 *** (7.49)     1.50 *** (4.99)    2.50 *** (4.24)    0.50 (1.08)
t = 2       3.00 *** (17.68)    3.25 *** (7.66)    3.00 *** (3.82)    0.75 (1.57)
t = 3       3.75 *** (15.57)    3.75 *** (7.78)    4.75 *** (6.80)    1.25 ** (2.18)
t = 4       4.00 *** (20.09)    4.50 *** (8.30)    5.00 *** (5.93)    1.50 ** (2.44)
t = 5       4.00 *** (18.04)    4.50 *** (6.71)    5.25 *** (5.73)    1.50 *** (2.67)
t = 6       3.75 *** (13.31)    4.75 *** (7.50)    5.50 *** (5.96)    1.75 ** (2.25)
t = 7       3.50 *** (11.47)    4.50 *** (6.57)    5.50 *** (6.61)    1.50 * (1.87)
t = 8       3.00 *** (10.58)    5.00 *** (7.16)    5.50 *** (4.76)    2.00 ** (2.45)
t = 9       2.50 *** (8.33)     4.50 *** (6.52)    5.50 *** (4.10)    2.25 ** (2.36)
t = 10      2.25 *** (7.31)     3.75 *** (5.17)    5.75 *** (4.02)    1.75 ** (2.36)
t = 11      1.50 *** (5.93)     3.50 *** (5.17)    6.00 *** (4.39)    2.00 ** (1.98)
t = 12      1.00 *** (3.67)     3.25 *** (4.47)    4.50 *** (3.18)    1.75 * (1.73)
t = 13      0.50 * (1.70)       3.00 *** (4.06)    4.50 *** (3.70)    2.00 ** (2.42)
t = 14      -0.25 (1.05)        3.75 *** (5.72)    4.75 *** (3.68)    1.75 ** (1.98)
t = 15      -1.00 *** (4.05)    2.50 *** (4.54)    4.00 *** (4.21)    2.25 ** (2.11)
t = 16      -1.50 *** (5.50)    1.50 *** (2.94)    3.50 *** (3.07)    1.75 (1.63)
t = 17      -2.00 *** (7.73)    1.50 *** (2.85)    3.50 *** (2.72)    1.00 (1.06)
t = 18      -2.75 *** (11.90)   1.25 ** (2.41)     3.75 *** (3.78)    1.00 (1.17)
t = 19      -3.50 *** (13.55)   1.00 * (1.91)      3.50 *** (3.34)    1.50 (1.64)
t = 20      -4.00 *** (17.96)   0.75 * (1.72)      1.50 (1.62)        0.75 (1.04)

Notes: The sample consists of 9,672 articles published in the top five journals between 1970 and 2000. It does not include notes, comments, announcements, or AER Papers and Proceedings issues. The base category is defined as follows: t equals 0, y equals 1970, theory papers for r, and AER for journal dummies. All columns include controls for (1) the papers' characteristics [X.sub.i] (journal-of-publication dummies); and (2) year-of-citation fixed effects ([[gamma].sub.y]([tau])). Absolute values of bootstrapped t statistics are given in parentheses (1,000 iterations). * Significant at 10%; ** significant at 5%; and *** significant at 1%.

Author: Anauati, Victoria; Galiani, Sebastian; Galvez, Ramiro H.

Publication: Economic Inquiry

Date: Apr 1, 2016
