# Is an article in a top journal a top article?

This study ranks 15 leading finance journals by the average number
of Social Sciences Citation Index cites per article for articles
published in 1996. It also defines a "top article," as opposed
to an "article in a top journal." Using different criteria for top
articles, I examine the Type I error (a "top" article is rejected
by a particular decision rule, e.g., publication in the top three journals)
and the Type II error (a "non-top" article is accepted as a
top article) for each journal and for combinations of journals. Given
the high error rates, the results suggest that identifying top articles
requires looking beyond the top three journals, as well as examining each
article more carefully for its intrinsic quality.

**********

A colleague was recently told by his dean that to be promoted to professor, he needed to have more publications in the "top three" finance journals. Another colleague, an assistant professor at another university, was told that only articles in the top three journals would count towards his promotion and tenure decision. But the top three journals were not the same in these two cases. Assistant professors are often told that they need publications in the top three journals or "top N" journals.

The purpose of this article is to investigate whether a "top N journals" approach provides a reasonable decision rule for identifying top articles in the finance literature. I use Type I and Type II errors to evaluate the accuracy of these decision rules. I define a Type I error as occurring when a "top" article is rejected by the decision rule; I call this a Reject Top Article (RTA) error. I define a Type II error as occurring when a "non-top" article is accepted as a top article; I call this an Accept Non-Top Article (ANTA) error.
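To make the two error rates concrete, here is a minimal Python sketch. The journals, cite counts, and threshold below are hypothetical illustrations, not the paper's data:

```python
# Hypothetical (journal, cites) pairs; the decision rule accepts any
# article published in a "top N" journal.
articles = [
    ("JF", 30), ("JF", 2), ("JFE", 25), ("RFS", 3),
    ("JBF", 28), ("FM", 1), ("JMCB", 15), ("JPM", 0),
]
top_n_journals = {"JF", "JFE", "RFS"}
threshold = 10  # a "top article" has more cites than this

top_articles = [(j, c) for j, c in articles if c > threshold]
accepted = [(j, c) for j, c in articles if j in top_n_journals]

# RTA (Type I): share of top articles the rule rejects.
rta = sum(1 for j, c in top_articles if j not in top_n_journals) / len(top_articles)
# ANTA (Type II): share of accepted articles that are not top articles.
anta = sum(1 for j, c in accepted if c <= threshold) / len(accepted)
print(rta, anta)  # 0.5 0.5
```

In this toy sample, half of the genuinely top articles appear outside the top three journals, and half of the accepted articles are not top articles.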

In this study, I rank 15 leading finance journals by the average number of Social Sciences Citation Index (SSCI) cites per article from 1996 to January 2004 for articles published in those 15 journals in 1996. Recent studies often use Journal Citation Reports (JCR) impact factors to rank journals. The JCR impact factor for a journal is the number of cites in year t to articles published in years t-1 and t-2, divided by the number of articles published in years t-1 and t-2. Given the possibly long life over which an article can be cited, the impact factor is a relatively short-term measure of citation impact. Although I use one year of publications, 1996, the citations are counted over a much longer period than that of the JCR impact factors.
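The JCR impact factor definition above reduces to a one-line computation; the numbers in the usage line are hypothetical:

```python
def jcr_impact_factor(cites_in_year_t, articles_in_t1_and_t2):
    """JCR impact factor for year t: cites in year t to articles
    published in years t-1 and t-2, divided by the number of articles
    published in years t-1 and t-2."""
    return cites_in_year_t / articles_in_t1_and_t2

# Hypothetical journal: 300 such cites to 150 articles from t-1 and t-2.
print(jcr_impact_factor(300, 150))  # 2.0
```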

I also define a "top article" (one whose number of cites is above the median, mean, 90th percentile, or 95th percentile for a set of leading finance journals) as opposed to an "article in a top journal." The results show that using the top three (JF, JFE, RFS) approach to identify top articles (based on the mean number of cites per article) leads to an RTA error 44% of the time and an ANTA error 33% of the time. I provide similar information for the 15 individual journals and for different levels of citations. I find that at the 90th and 95th percentiles, the RTA error decreases but the ANTA error increases dramatically. I examine the trade-off between these two errors. My results demonstrate that, for anyone interested in identifying top articles, the RTA and ANTA errors associated with a top three journals approach suggest looking at a broader set of journals and examining each article more closely for its intrinsic quality, rather than relying on the general quality of its journal.

The article is organized as follows. Section I discusses the literature and method. Section II presents the results. Section III summarizes and concludes.

I. Literature and Method

I examine the 5,979 SSCI cites (as collected on February 7-12, 2004) for 626 articles published in 1996 in 15 leading finance journals. The 15 journals include 11 of a core 16 set of journals, as defined and studied by Chan, Chen, and Steiner (2002) and four other journals included in other similar studies.

Chan et al. (2002) use Journal of Finance-equivalent sized pages in their 16 finance journals from 1990 to 2001 to rank finance programs. The set of 11 journals from their set of core 16 journals comprises Journal of Finance (JF), Journal of Financial Economics (JFE), Review of Financial Studies (RFS), Journal of Financial and Quantitative Analysis (JFQA), Journal of Business (JB), Journal of Financial Intermediation (JFI), Journal of International Money and Finance (JIMF), Financial Management (FM), Journal of Banking and Finance (JBF), Journal of Futures Markets (JFM), and Journal of Portfolio Management (JPM). I do not include Chan et al.'s other journals [Financial Analysts Journal (FAJ), Journal of Financial Services Research (JFSR), Journal of Financial Research (JFR), Journal of Business Finance & Accounting (JBFA), and Financial Review (FR)] because these journals are either not covered or not fully covered by the SSCI during the sample period.

Chan et al. (2002) also rank programs based on the top three of JF, JFE, and RFS. They also examine four other journals covered by the SSCI: Journal of Money, Credit and Banking (JMCB), Journal of Risk and Insurance (JRI), Real Estate Economics (REE), and Journal of Real Estate Finance and Economics (JREFE). These journals cover a broad range of finance areas, including money and banking, real estate finance, and insurance.

I note here that the definition of the top three may vary by time and measure. For example, Niemi (1987) uses total research productivity (pages) in the top three of JF, JFE, and JFQA from 1975 to 1986 to rank programs. Chung, Cox, and Mitchell (2001) also use citations in JF, JFE, and JFQA to identify the most often cited authors from 1974 to 1998. Chung et al. (2001) note that they exclude RFS because of its shorter existence.

Alexander and Mabry (1994) use citations in the four leading finance journals (JF, JFE, JFQA and RFS) from January 1987 to March 1991 to rank journals. Their top three comprises JFE, JF, and JB, with JFQA in fourth place.

Arnold, Butler, Crack, and Altintig (2003) rank finance journals according to the number of citations in JF, JFE, RFS, JB, JFQA, and FM during 1990-1999. They also rank journals by number of important papers, number of recent important papers, and an impact factor. The top five journals in all rankings are JF, JFE, RFS, JB, and JFQA. However, the rankings vary with different measures. FM was in ninth place, after Econometrica, Journal of Political Economy, and American Economic Review. The literature generally seems to agree on a top two of JF and JFE, but third place seems to be a tie among several candidates: RFS, JFQA, and JB.

Although the literature on finance journal quality most often treats citations as the best approach, citation counts may suffer from inherent biases, such as self-citing. Citation-based studies also cannot capture the perspectives of individuals who have different research interests or come from different geographic areas.

There are two approaches other than citations that can be used to rate journals or departments. One is to survey department chairs (e.g., Coe and Weinstock, 1983; and Borde, Cheney, and Madura, 1999), or to survey faculty (e.g., Oltheten, Theoharakis and Travlos, 2003; and Christoffersen, Englander, Arize, and Malindretos, 2001). The second approach is to rank departments on the basis of memberships on journal editorial boards (Chan and Fok, 2003).

I select 1996 because it gives a recent time frame of seven to eight years. This or an even shorter time period corresponds to many evaluation situations, such as promotion and tenure decisions or filling new positions within a department. More recent records can also be used for public relations purposes to illustrate the quality of an academic program to alumni and to potential students and donors. Chung et al. (2001) show that the number of citations for articles cited in JF, JFE, and JFQA between 1974 and 1998 increases sharply during the first three years after publication, reaches a peak during the fourth year, and then declines gradually. My sample period should contain the peak period for citations. However, to the extent that these articles are cited in later years, their total citation records will be understated. For example, in 2002 the JCR's cited half-lives in years for the sample of journals are: JFE, JFQA, and JB (>10), JF (9.3), FM (9.2), JRI (9.1), JFM (8.5), JPM (7.9), RFS (7.8), JMCB (7.7), JIMF (7.3), JBF (6.5), JREFE (6.3), JFI (5.8), and REE (5.0). JCR defines the cited half-life as the number of years, counting back from the current year, that account for 50% of the citations received by a journal's articles.
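The cited half-life can be sketched as follows. JCR reports fractional half-lives by interpolation; this whole-year approximation on hypothetical counts conveys the idea:

```python
def cited_half_life(cites_by_age):
    """Whole-year approximation of the JCR cited half-life: the number
    of years, counting back from the current year, needed to accumulate
    half of the journal's total citations. cites_by_age[0] is the
    current year, cites_by_age[1] the year before, and so on."""
    total = sum(cites_by_age)
    running = 0
    for years_back, c in enumerate(cites_by_age, start=1):
        running += c
        if running >= total / 2:
            return years_back
    return len(cites_by_age)

# Hypothetical citation counts by article age, newest first.
print(cited_half_life([10, 20, 30, 25, 15]))  # 3
```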

Chung et al. (2001) identify the most cited authors from 1974 to 1998 using citations in JF, JFE, and JFQA. Chan (2001) examines citation data for 1998 and 1999 from JF, JFE, JFQA, and RFS to rank journals by citation proportions over several time lags.

In this article, I use a broader set of citations, the SSCI, which allows me to determine the relative influence of more specialized journals in areas such as banking, real estate, and insurance. These areas are less likely to be reflected in the top journals of JF, JFE, JFQA, and RFS. For example, of the 15 journals I consider in this article, JMCB, JRI, and REE rank in 5th, 10th, and 13th places, respectively, but in Chan's study JMCB is tied for 15th place with Journal of Empirical Finance and Journal of Financial Research, JRI is not ranked at all, and REE is in 27th place. These differences highlight the segmentation of these important areas and the lack of their recognition in the top three or four journals.

The SSCI cites approach captures cites in the closely related fields of economics, accounting, and other areas of business and law. The JCR impact factors that are often used to rank journals include cites in these areas. Further, Borokhovich, Bricker, and Simkins (2000) find that approximately one-third of all citations to articles in JF and JFE come from journals outside finance, especially economics journals, which account for 15.3% for JF and 19.4% for JFE. Chan (2001) examines the journals cited in the 1998 and 1999 issues of JF, JFE, RFS, and JFQA. In terms of the percentage of citations in this set of journals, 67.73%, 18.73%, 8.55%, 2.72%, and 2.26% were from finance, economics, statistics, accounting, and miscellaneous (business, law, and regulatory) journals, respectively. These two studies illustrate the importance of non-finance journals to the finance literature and vice versa, and support the use of a broader set of citations to fully measure an article's impact on the literature.

Although the SSCI cites approach reflects a broader set of journals and fields than a top three or four approach, it does not include all cites. For example, in my study, I do not include some or all of the cites in other finance journals such as FAJ, JFSR, JFR, JBFA, FR, Journal of Real Estate Research (JRER), and Journal of Financial Markets (JFINMKT) because they are either not fully covered or not covered at all by the SSCI during my sample period.

Another example of the differences in these areas is a 1995 survey of FMA members on rankings of journals in subfields by Christoffersen et al. (2001). They find the top four journals in corporate finance are JF, JFE, JFQA, and FM; the top two journals in financial institutions and markets are JBF and JMCB; the top journal in finance and insurance is JRI; the top journal in international finance is JIMF; the top three journals in investments are JPM, JF, and FAJ; and the top three journals in real estate finance are JREFE, REE, and JRER.

Using a global faculty survey, Oltheten et al. (2003) find that respondents' research interests (corporate finance; investments and derivatives; financial institutions; and international finance, institutions, and markets), geographic location (North America, Europe, Asia, and Australia/New Zealand), seniority, and affiliation with a journal affect their quality perceptions.

Krishnan and Bricker (2004) attempt to decompose an article's citation performance, over the year of publication and the next two years, into the quality of the article and the value added by the journal. They use author reputation and school reputation as proxies for the quality of the article. They use journal age, editorial board quality, and readership characteristics as proxies for the value added by the journal. In their analysis of articles published from 1990 to 1998 in JF, JFE, RFS, JFQA, and JB, they find that JF, JFE, and RFS add significant value in terms of citations over and above inherent article quality. For example, in their Table V, they find that after controlling for article quality over the full sample period, JF, RFS, JFE, JB, and JFQA add 4.1655, 3.1805, 2.8590, 0.4142, and 0.3967 SSCI cites, respectively. Only the coefficients for JF, RFS, and JFE are statistically significant.

From this brief review, it is clear that the literature related to the quality of journals has taken many approaches. The rankings for the survey studies, particularly for the highest ranked journals, are generally consistent with the rankings that use the citations-based measures.

As have past authors in this literature, I recognize the limitations of any one approach to determining the quality of a journal. Even with objective measures such as citations, there are limitations beyond the different citation measures that I can use. I consider author self-citations in my analysis.

I also note that a citation may be a "correction" citation, but in this case the citation may be more the reflection of a poor article, rather than a top article. To define a citation as a correction requires a subjective judgment by the citing author. Many articles extend other studies in the literature, which can be construed as correcting previous works or building on those earlier works. My experience suggests that citations are based on building on earlier research. In addition, an objective measure of a correction cite is difficult to define.

An article can be categorized by type of article, e.g., survey, empirical, theoretical, or subject area. The articles I cite in this study are not categorized by type of article. To the extent that a reviewer of an author's publication and citation record values these characteristics in different ways, then the value assigned to the citations associated with an article will vary by reviewer.

Another issue is the "hot topic" area. Suppose I could objectively define a hot topic area and could identify the time frame for the topic. The number of article citations would then appear to be positively related to how early in the sample's time frame the article appeared, the contribution of the article, and the number of articles published in the hot topic area. These factors are difficult to define and separate objectively. And again, based on these factors a reviewer may attach different values to the citations.

II. Results

Table I presents a comparison of 5,979 SSCI cites (as collected on February 7-12, 2004) for 626 articles published in 1996 in 15 leading finance journals. I sort the journals by the mean number of cites per article. JF is at the top with 24.3 cites, followed closely by JFE with 23.1 cites. RFS, JFQA, JMCB, and JB are next with 16, 13.2, 10.4, and 9.8 cites, respectively. I note that if the journals are ranked by the cites at the 50th percentile (median), JB is in fourth place, followed by JFQA and JMCB. These results are consistent with past studies, which suggests they are reasonably robust. However, I caution that rankings may vary somewhat by measure and by year. This caution applies to all studies of finance research productivity and rankings.

I use a test of differences of means, with equal or unequal variances as appropriate, to test whether one journal's mean is higher than another's. I apply this test to each pair in this group of six journals. Using a 5% level of significance, I find that JF and JFE are significantly higher than the other four; RFS is not significantly higher than JFQA or JMCB, but is significantly higher than JB; and JFQA, JMCB, and JB are not significantly higher than each other. For all 15 journals, the median and mean average cites are four and 9.55, respectively.
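The unequal-variance (Welch) form of the difference-of-means test can be sketched in pure Python. The per-article cite counts below are hypothetical, and the sketch computes only the test statistic, not the degrees of freedom or p-value:

```python
import math
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t-statistic for a difference of means with
    unequal variances."""
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

# Hypothetical cites per article for two journals.
journal_a = [30, 22, 18, 27, 25]
journal_b = [12, 9, 8, 11, 10]
print(round(welch_t(journal_a, journal_b), 2))  # 6.6
```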

Table I also provides information on the distribution of average cites. The table presents standard errors and more detailed information, from the minimum to the maximum in increments of every tenth percentile. Given the skewness of these distributions, the more detailed information gives a better picture of each of the 15 journals. Although these results for all 15 journals are not shown in the table, the percentage of articles with zero cites is 10%, with one or fewer cites is 26%, with two or fewer cites is 37%, with three or fewer cites is 45%, and with four or fewer cites is 53%. Thus, if this set of journals is truly representative of the finance literature, then having more than four cites over the seven to eight years after publication places an article in the upper half of the distribution. If I use the mean of average cites of 9.55 as a measure of central tendency, then ten or more cites places an article in the top 29%. For all 15 journals, if an article is to place in the top 10%, then it must be cited 25 or more times.
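These percentile placements reduce to counting the share of articles at or below a given cite count. A sketch on hypothetical data:

```python
def share_at_or_below(cites, k):
    """Fraction of articles with k or fewer cites."""
    return sum(1 for c in cites if c <= k) / len(cites)

# Hypothetical all-journal distribution of cites per article.
all_cites = [0, 0, 1, 2, 3, 4, 4, 5, 9, 12, 25, 40]
print(round(share_at_or_below(all_cites, 4), 2))  # 0.58
```

An article with more than four cites in this toy distribution would sit in roughly its upper 40%.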

When researchers examine an article's citation record, they often include information on self-cites. I define an article self-cite as a cite where one or more of the authors is an author of the citing article. Author self-cites should not be confused with journal self-cites, which are cites to articles in the same journal, e.g., Chan, Fok, and Pan (2000). Since the focus of my study is the article, I examine only author self-cites. The percentages of author self-cites for the 15 journals vary from lows of 4% and 5.2% in JPM and JFQA, respectively, to highs of 13.3% and 22.3% in JB and JREFE, respectively.

I also provide the means and medians without self-cites. The rankings by the means without self-cites are the same as the rankings of the means with self-cites. The medians for both measures for all 15 journals, and for 11 of the 15 journals, are the same. Compared to the medians with self-cites, the medians without self-cites decrease by one for JF, RFS, and JMCB, and by two for JB. The rankings for the two median measures are similar. The bottom two rows of Table I show the statistics for the two all-15 distributions. Using a paired two-sample means t-test, the difference in means is significant at the 0.0001 level. However, except for the difference in means of 0.77 cites, the distributions appear similar. An evaluator assessing a citation record that includes self-cites may partially or completely discount those cites.

Table II presents information for the 15 journals by the percentage of articles that meet certain measures that could define a "top article." In Table II, I use the median, mean, 90th percentile, and 95th percentile for all 15 journals to define a top article. If I define a top article as one with more than the median of four cites per article, then the ANTA error is relatively low for JF (6%) and JFE (14%). JB, RFS, and JFQA are clustered around 26% to 28%, followed by JFI and JMCB at 33% and 47%, respectively. When I define a top article as being above the All 15 mean of 9.55 cites per article (also the top 29% of All 15 articles, as shown in the last row), the error of accepting a non-top article increases for the top six journals. For JF and JFE, the error rates are 30% and 26%, respectively, followed by RFS at 46% and JFQA at 55%. The rest of the journals range between 63% and 100%. When I define a top article as being above the 90th percentile of 24 cites per article, the error of accepting a non-top article increases dramatically even for the top two journals, JF and JFE, with errors of 65% and 62%, respectively, followed by JFQA and RFS with error rates of 79% and 84%, respectively. The error rates for the other journals range from 89% to 100%. When I use above the 95th percentile of 35 cites per article, the error of accepting a non-top article is very high even for the top two journals, with errors of 77% and 83%, respectively. RFS, JFQA, JMCB, and JIMF are very high at 89%, 93%, 95%, and 98%, respectively. The error rates for the other journals are 100%.

The ANTA error can be viewed on an individual journal basis, as in Table II, or on a cumulative top N journals basis as in Table III and Figure 1. Table III examines the ANTA errors on a cumulative basis, i.e., as the number N in the "top N" journals increases. Figure 1 shows the same information in graphic form.

[FIGURE 1 OMITTED]

When I use the median or mean as the measure of a top article, the results for the top four or top seven journals are not much higher than for the top three journals. For example, the ANTA error for the median measure is 15% for the top three journals and 17% and 25% for the top four and top seven journals, respectively. The rate of increase is lower when I use the 90th and 95th percentiles, because the errors start at higher levels.

Table IV examines the RTA errors on a cumulative basis, i.e., as the number N in the top N journals increases. Figure 2 shows the same information in graphic form. Using above the median to define a top article, the cumulative RTA error for the top three journals (JF, JFE, and RFS) is 56%. In other words, a decision rule of accepting only articles in these three journals rejects 56% of the top articles in this category. A top five journals approach rejects 39% of the top articles. A top ten journals approach (JF to JRI) reduces the RTA error to 12%. If I use above the mean to define a top article, the cumulative RTA error of rejecting a top article with a top three of JF, JFE, and RFS is 44%. A top five journals approach rejects 26% of the top articles. A top nine journals approach (JF to JBF) reduces the RTA error to 8%. Using above the 90th percentile as the definition of a top article, the cumulative RTA error of rejecting a top article with a top three of JF, JFE, and RFS is 26%. The error decreases to 8% with the addition of JFQA and JMCB. Using above the 95th percentile as the definition of a top article, the RTA error with the top three of JF, JFE, and RFS is 18%. The error decreases to 12% when I add JFQA and to 3% when I add JMCB, JB, and JFI.
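The cumulative RTA calculation behind Table IV can be sketched as follows, with a hypothetical journal ranking and one hypothetical article per journal for brevity:

```python
def cumulative_rta(articles, ranking, threshold):
    """RTA error for each 'top N' cut of the ranking: the share of top
    articles (cites above threshold) published outside the top N set."""
    top = [j for j, c in articles if c > threshold]
    out = []
    for n in range(1, len(ranking) + 1):
        top_n = set(ranking[:n])
        out.append(sum(1 for j in top if j not in top_n) / len(top))
    return out

# Hypothetical data.
articles = [("JF", 30), ("JFE", 25), ("RFS", 12), ("JFQA", 20),
            ("JMCB", 15), ("JB", 3), ("JBF", 18), ("FM", 2)]
ranking = ["JF", "JFE", "RFS", "JFQA", "JMCB", "JB", "JBF", "FM"]
print(cumulative_rta(articles, ranking, 10)[2])  # RTA for a top three rule: 0.5
```

As N grows to cover all journals that publish top articles, the RTA error falls to zero, mirroring the monotone decline in Table IV.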

[FIGURE 2 OMITTED]

Figure 2 shows that as the definition of a top article increases from the median to the 95th percentile, the initial slopes of the lines increase and the lines level off much faster.

When evaluating the Type I and II errors of a decision rule, one usually assigns a cost to each type of error. When I compare the ANTA and RTA errors, there does not appear to be a basis for differentiating the costs. If I wished to assign different costs to the two errors, I would have to weight the errors or costs in some manner. I assume that the costs are equivalent, and so combine the ANTA and RTA errors and show them in Table V and Figure 3.

[FIGURE 3 OMITTED]

Across each row in Table V, I see that the combined error increases as the definition increases from median to 95th percentile. Looking down the four columns related to the different definitions and Figure 3, I see that the combined errors reach a minimum, but then seem to plateau. For example, for the above the median definition of a top article the combined error decreases from 71% for a top three approach to a minimum of 49% for a top 12-13 approach, and only increases to 53% for all 15 journals.
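Under the equal-cost assumption, the search for the N that minimizes the combined error can be sketched as follows (hypothetical data again; the interior minimum before the last, weakly cited journal mirrors the plateau pattern described above):

```python
def combined_error(articles, ranking, threshold, n):
    """Equal-cost sum of the RTA and ANTA errors for a 'top N' rule."""
    top_n = set(ranking[:n])
    top = [(j, c) for j, c in articles if c > threshold]
    accepted = [(j, c) for j, c in articles if j in top_n]
    rta = sum(1 for j, c in top if j not in top_n) / len(top)
    anta = sum(1 for j, c in accepted if c <= threshold) / len(accepted)
    return rta + anta

# Hypothetical (journal, cites) pairs and ranking.
articles = [("JF", 30), ("JF", 2), ("JFE", 25), ("RFS", 3),
            ("JFQA", 20), ("JMCB", 15), ("JB", 1), ("JBF", 18),
            ("FM", 1), ("FM", 2)]
ranking = ["JF", "JFE", "RFS", "JFQA", "JMCB", "JB", "JBF", "FM"]
combined = {n: combined_error(articles, ranking, 10, n)
            for n in range(1, len(ranking) + 1)}
best_n = min(combined, key=combined.get)
print(best_n)  # 7: adding the last journal only raises the combined error
```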

I also ask how well impact factors predict the future cites. With my sample, all 15 journals have a JCR impact factor in each year from 1996 to 2002.

JCR defines the impact factor for a journal as the number of cites in year t to articles published in years t-1 and t-2, divided by the number of articles published in years t-1 and t-2. This is a relatively short-term measure of citation impact and does not necessarily predict later citation performance. However, to the extent that people use impact factors as measures of potential cites, they bestow a predictive power on the factors.

Table VI presents the JCR impact factors for each year, the average impact factors from 1996 to 2002, and the mean actual cites for the sample period. Table VI also presents the estimated number of cites, based on a regression with the mean cites for each journal as the dependent variable and its average impact factor as the independent variable. The regression has an adjusted [R.sup.2] of 0.859 and is significant at the 0.001 level.
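The regression in Table VI is a simple one-variable OLS fit. Here is a self-contained sketch using hypothetical journal-level numbers (not the paper's data), showing how the intercept, slope, and R-squared are obtained:

```python
from statistics import mean

def ols_fit(x, y):
    """Simple OLS of y on x: returns intercept, slope, and R^2."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return intercept, slope, 1 - ss_res / ss_tot

# Hypothetical average impact factors and mean cites per article.
impact = [2.0, 1.5, 1.2, 0.8, 0.5]
cites = [24.0, 18.5, 14.0, 9.0, 6.0]
intercept, slope, r2 = ols_fit(impact, cites)
print(round(r2, 3))  # a tight fit at the journal level
```

A high R-squared at the journal level, as here, still says nothing about how far any single journal's actual cites sit from its fitted value, which is the caution raised below.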

There is little doubt that the mean number of journal cites is highly correlated with average impact factors. However, the actual cites for some specific journals are significantly different from the estimated cites. For example, JFQA's actual mean cites are 185% of the estimated cites and FM's actual mean cites are 38% of the estimated cites. The standardized residuals for JFQA and FM are outside the 95% confidence intervals for the estimates. Even with the high [R.sup.2] for the regression model, researchers need to be careful about using it to predict cites for a specific journal.

I also note that the ranks differ between the average impact factor (IF) and the actual mean cites. JFQA, JB, JRI, JFM, FM, and JREFE differ by at least two ranks. This example shows how similar measures can provide quite different ranks and provides another example of how difficult it is to pick an "N set" of top journals.

I run a similar regression model, this time using the actual cites per article as the dependent variable for the 626 articles in the 1996 issues of the 15 journals. The model has an adjusted [R.sup.2] of 0.261, which is significant at the 0.001 level. The estimated model is similar to the previous regression model in that the estimated cites per article equals -0.188 + (9.7 x Average Impact Factor). There is little doubt that the actual article cites are correlated with the average impact factors. However, if I look at the results from another perspective, 73.9% of the variation is unexplained. These results indicate that with this sample there is a strong positive overall relation between JCR impact factors and actual cites. However, again, the researcher should be very careful about using an impact factor's predictive ability for a specific journal and even more careful about using it for a specific article.
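For reference, the article-level fitted model reported above can be evaluated directly; the impact factor of 1.0 in the usage line is hypothetical:

```python
def estimated_article_cites(avg_impact_factor):
    """Article-level fitted model reported in the text:
    estimated cites = -0.188 + 9.7 * (average impact factor)."""
    return -0.188 + 9.7 * avg_impact_factor

print(round(estimated_article_cites(1.0), 3))  # 9.512
# With adjusted R^2 = 0.261, about 1 - 0.261 = 73.9% of the
# article-level variation remains unexplained.
```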

I ask, "Can the total number of cites in lesser journals or cites in journals not included on the list meet the impact test?" For example, one professor has two FM articles and each has five cites. Another professor has one RFS article with seven cites. Which one makes the larger impact on the literature? I can make the case that the two FM articles have a larger impact than does the one RFS article.

But I note that cites in some journals are not captured in the SSCI database. Should researchers consider cites in journals not captured in the SSCI database? Other types of articles may not appear on the journal lists, such as articles in the publications of federal regulators such as regional Federal Reserve Banks or publications of major trade organizations such as the Appraisal Journal. These journals may not appear on a formal list of journals, but some of these articles may be cited in the SSCI database. Should the impact of these articles be considered?

To the extent that a school is more interested in the actual impact of a professor's work than the journals in which the articles appear, the school should ask for more detailed information. In addition to a regular publication list, an evaluator may want to see a list of those journals and articles that cite a person's collection of works. Fishe (1998) examines such information in his analysis of research standards for promotion to full professor at 90 US finance departments ranked one to 96 in Borokhovich, Bricker, Brunarski, and Simkins (1995). Fishe (1998) shows that 26 of the 76 professors promoted to full professor at the finance departments rated 21 to 96 did not publish in one of the top three (JF, JFE, RFS), but many of them did have numerous SSCI cites. The number of SSCI cites per year since Ph.D. degree for the 76 professors ranged from 0.1 to 124.8 with a median of 7.0 and a 25th percentile of 3.9.

Fishe (1998) also provides the cumulative cites per year since Ph.D. degree for each professor. I combine and reorganize his data to shed further light on the subject in my paper. For each grouping of individuals by number of top three articles, the number of professors and the mean cites per year are: zero top three articles (n = 26, 6.59 cites); one top three article (n = 12, 7.88 cites); two to three top three articles (n = 15, 8.67 cites); four to five top three articles (n = 12, 8.98 cites); and more than five top three articles (n = 11, 34.05 cites). (1)

One result of this analysis is that the mean number of cites per year for the different groups does not become statistically significantly different at the 0.05 level until a professor has more than five top three articles.

When I run a regression of the 76 professors using the cites/year as the dependent variable and the number of top three articles as the independent variable, I find that the adjusted [R.sup.2] is 0.357 and the model is significant at the 0.001 level. When I run the same regression by dropping the 11 observations with more than five top three articles, the adjusted [R.sup.2] drops to -0.004 and is not significant at any reasonable level of confidence.

Fishe's (1998) work illustrates that approximately one-third of the 76 full professors were able to have an impact on the literature, as measured by SSCI cites per year, without publishing even one article in JF, JFE, or RFS. These professors make an impact by publishing in other finance, economics, and business journals. Based on this study, the citation list may be a better reflection of an individual's impact on the literature than that person's publication list.

Other information that reflects a person's impact on the literature may include the relative percentage of self-cites or the number of co-authors for each article cited. If an article is newly published, then the journal ranking may be the best information on the potential impact of a work. However, if the article is three years old or older, then an evaluator should require a record of where that work has been cited. If the evaluator does not ask for this information, then he or she is more likely to accept a non-top article in a top journal as a top article, or reject a top article in a non-top journal.

Even with a complete citation record for an article, evaluators can continue to disagree. As discussed earlier in this article, an evaluator might consider many characteristics related to a citation. Where is the article cited? Is the citation a self-cite? Is the evaluator familiar with the cited or citing journals because of his or her research interests or geographic location? Is the cited work being corrected in the citing article or is the citing article extending the work of the cited article? Does the cited article provide a survey of previous literature, an empirical test of an already established theory, or an important new theoretical insight? All three article types provide information, but each evaluator might view the relative contributions differently.

Another evaluative factor that is pertinent to this article is the citation record of an article relative to its journal. Krishnan and Bricker (2004) find that JF, JFE, and RFS add significant value in terms of citations over and above inherent article quality, but JFQA and JB do not. Krishnan and Bricker's paper indicates that the number of citations may be a function of the article quality and the method of dissemination, i.e., the journal. Referring back to Table I, six citations for a JF article would put it in the bottom quartile and at 25% of the mean of 24.3 cites for that journal. Six citations for a JBF article would put it in the upper quartile and at 130% of the mean of 4.6 cites for that journal. Given Krishnan and Bricker's results, an evaluator might argue that when he or she controls for the journal of publication, the six cites in JBF indicate a higher quality than do the six cites in JF.

III. Summary and Conclusion

In this study, I rank 15 leading finance journals by the average number of Social Sciences Citation Index (SSCI) cites per article from 1996 to January 2004 for articles published in those 15 journals in 1996. Other recent studies often use Journal Citation Reports (JCR) impact factors to rank journals. Although my study uses one year of publications, 1996, it counts citations over a much longer period than the two-year window of the JCR impact factors.

I also define and analyze a "top article" (one whose number of cites is above the median, mean, 90th percentile, or 95th percentile for a set of leading finance journals) as opposed to an "article in a top journal." The results show that a top three (JF, JFE, RFS) approach to identifying top articles (based on being above the mean of 9.55 cites per article) leads to a Type I error (a top article is rejected by the decision rule) 44% of the time and a Type II error (a non-top article is accepted as a top article) 33% of the time. I provide similar information for the 15 individual journals and for different levels of citations. At the 90th and 95th percentiles, the Type I error decreases but the Type II error increases dramatically. These results demonstrate that an evaluator interested in identifying top articles should, given the Type I and II errors associated with a top three journals approach, look at a broader set of journals and examine each article for its intrinsic value, rather than rely on the general quality of its journal.

I also examine the relation between the JCR impact factors for the 15 journals and their actual number of cites. The first regression uses the mean cites per journal as the dependent variable and the average JCR impact factor (1996-2002) as the independent variable. A second regression uses the actual cites per article as the dependent variable and the average JCR impact factor as the independent variable. The results indicate that there is a strong overall positive relation between JCR impact factors and actual cites. However, I urge caution on an impact factor's predictive ability for a specific journal or a specific journal article.

Based on this study, the citation list may be a better reflection of an individual's impact on the literature than that person's publication list, and an evaluator may want a record of where that work has been cited. If the evaluator does not solicit or develop this information, then he or she is more likely to accept a non-top article in a top journal as a top article, or reject a top article in a non-top journal.

The author thanks Pat Fishe, Melissa Frye, and Drew Winters for very useful comments. The article benefited greatly from comments by Lemma Senbet and Alex Triantis (the Editors) and two anonymous referees.

(1) Fishe (1998) also provides similar information on 51 full professors in finance programs ranked 1-20. The average number of top three articles was 6.45 and the mean cites per year was 37.5 cites.

References

Alexander, J.C. and R.H. Mabry, 1994, "Relative Significance of Journals, Authors, and Articles Cited in Financial Research," Journal of Finance 49, 697-712.

Arnold, T., A.W. Butler, T.F. Crack, and A. Altintig, 2003, "Impact: What Influences Finance Research?" Journal of Business 76, 343-362.

Borde, S.F., J.M. Cheney, and J. Madura, 1999, "A Note on Perceptions of Finance Journal Quality," Review of Quantitative Finance and Accounting 12, 89-96.

Borokhovich, K.A., R.J. Bricker, K.R. Brunarski, and B.J. Simkins, 1995, "Finance Research Productivity and Influence," Journal of Finance 50, 1691-1717.

Borokhovich, K.A., R.J. Bricker, and B.J. Simkins, 2000, "An Analysis of Finance Journal Impact Factors," Journal of Finance 55, 1457-1469.

Chan, K.C., 2001, "A Citation-based Ranking of Journals in Financial Research: Some New Results," Journal of Financial Education 27, 36-52.

Chan, K.C., C.R. Chen, and T.L. Steiner, 2002, "Production in the Finance Literature, Institutional Reputation, and Labor Mobility in Academia: A Global Perspective," Financial Management, 31, 131-156.

Chan, K.C. and R.C.W. Fok, 2003, "Membership on Editorial Boards and Finance Department Rankings," Journal of Financial Research 26, 405-420.

Chan, K.C., R.C.W. Fok, and M. Pan, 2000, "Citation-Based Finance Journal Rankings: An Update," Financial Practice and Education 10, 132-141.

Christoffersen, S., F. Englander, A.C. Arize, and J. Malindretos, 2001, "Sub-Field Specific Rankings of Finance Journals," Journal of Financial Education 27, 37-49.

Chung, K.H., R.A.K. Cox, and J.B. Mitchell, 2001, "Citation Patterns in the Finance Literature," Financial Management 30, 99-118.

Coe, R.K. and I. Weinstock, 1983, "Evaluating the Finance Journals: The Department Chairperson's Perspective," Journal of Financial Research 6, 345-349.

Fishe, R.P.H., 1998, "What Are the Research Standards for Full Professor of Finance?" Journal of Finance 53, 1053-1079.

Krishnan, C.N.V. and R. Bricker, 2004, "Top Finance Journals: Do They Add Value?" Journal of Economics and Finance (Forthcoming).

Niemi, A.W. Jr., 1987, "Institutional Contributions to the Leading Finance Journals, 1975-1986: A Note," Journal of Finance 42, 1389-1397.

Oltheten, E., V. Theoharakis, and N.G. Travlos, 2003, "Faculty Perceptions and Readership Patterns of Finance Journals: A Global View," Journal of Financial and Quantitative Analysis (Forthcoming).

Stanley D. Smith is the SunTrust Chair of Banking and a Professor of Finance at the University of Central Florida in Orlando, FL.

**********


I also define a "top article" (one whose number of cites is above the median, mean, 90th percentile, or 95th percentile for a set of leading finance journals) as opposed to an "article in a top journal." The results show that using the top three (JF, JFE, RFS) approach to identify top articles (based on the mean number of cites per article) leads to an RTA error 44% of the time and an ANTA error 33% of the time. I provide similar information for the 15 individual journals and for different levels of citations. I find that at the 90th and 95th percentiles, the RTA error decreases but the ANTA error increases dramatically. I also examine the trade-off between these two errors. My results demonstrate that anyone interested in identifying top articles should, given the RTA and ANTA errors associated with a top three journals approach, look at a broader set of journals and examine each article more closely for its intrinsic quality, rather than rely on the general quality of its journal.
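The RTA and ANTA error rates can be computed mechanically once a citation cutoff and a journal set are fixed. The following sketch uses hypothetical data, not the study's sample, purely to illustrate the two definitions:

```python
# Hypothetical illustration of the paper's RTA (Type I) and ANTA (Type II)
# error rates for a "top N journals" decision rule.

def error_rates(articles, top_journals, cite_cutoff):
    """articles: list of (journal, cites). An article is 'top' if its cites
    exceed cite_cutoff; the rule accepts it if its journal is in top_journals."""
    top = [(j, c) for j, c in articles if c > cite_cutoff]
    # RTA: share of top articles the rule rejects (journal outside the top set)
    rta = sum(1 for j, _ in top if j not in top_journals) / len(top)
    # ANTA: share of rule-accepted articles that are not top
    accepted = [(j, c) for j, c in articles if j in top_journals]
    anta = sum(1 for _, c in accepted if c <= cite_cutoff) / len(accepted)
    return rta, anta

# Five hypothetical articles; cutoff set at the paper's all-15 mean of 9.55
sample = [("JF", 30), ("JF", 5), ("JFE", 25), ("JBF", 12), ("JPM", 2)]
rta, anta = error_rates(sample, {"JF", "JFE"}, cite_cutoff=9.55)
```

Here the rule accepting only JF and JFE rejects the well-cited JBF article (an RTA error) and accepts the lightly cited JF article (an ANTA error).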

The article is organized as follows. Section I discusses the literature and method. Section II presents the results. Section III summarizes and concludes.

I. Literature and Method

I examine the 5,979 SSCI cites (as collected on February 7-12, 2004) for 626 articles published in 1996 in 15 leading finance journals. The 15 journals include 11 of the core set of 16 journals defined and studied by Chan, Chen, and Steiner (2002), plus four other journals included in similar studies.

Chan et al. (2002) use Journal of Finance-equivalent sized pages in their 16 finance journals from 1990 to 2001 to rank finance programs. The set of 11 journals from their core 16 comprises Journal of Finance (JF), Journal of Financial Economics (JFE), Review of Financial Studies (RFS), Journal of Financial and Quantitative Analysis (JFQA), Journal of Business (JB), Journal of Financial Intermediation (JFI), Journal of International Money and Finance (JIMF), Financial Management (FM), Journal of Banking and Finance (JBF), Journal of Futures Markets (JFM), and Journal of Portfolio Management (JPM). I do not include Chan et al.'s other journals [Financial Analysts Journal (FAJ), Journal of Financial Services Research (JFSR), Journal of Financial Research (JFR), Journal of Business Finance & Accounting (JBFA), and Financial Review (FR)] because these journals are either not covered, or not fully covered, by the SSCI during the sample period.

Chan et al. (2002) also rank programs based on the top three of JF, JFE, and RFS. They also examine four other journals covered by the SSCI: Journal of Money, Credit and Banking (JMCB), Journal of Risk and Insurance (JRI), Real Estate Economics (REE), and Journal of Real Estate Finance and Economics (JREFE). These journals cover a broad range of finance areas, including money and banking, real estate finance, and insurance.

I note here that the definition of the top three may vary by time and measure. For example, Niemi (1987) uses total research productivity (pages) in the top three of JF, JFE, and JFQA from 1975 to 1986 to rank programs. Chung, Cox, and Mitchell (2001) also use citations in JF, JFE, and JFQA to identify the most often cited authors from 1974 to 1998. Chung et al. (2001) note that they exclude RFS because of its shorter existence.

Alexander and Mabry (1994) use citations in the four leading finance journals (JF, JFE, JFQA and RFS) from January 1987 to March 1991 to rank journals. Their top three comprises JFE, JF, and JB, with JFQA in fourth place.

Arnold, Butler, Crack, and Altintig (2003) rank finance journals according to the number of citations in JF, JFE, RFS, JB, JFQA, and FM during 1990-1999. They also rank journals by the number of important papers, the number of recent important papers, and an impact factor. The top five journals in all rankings are JF, JFE, RFS, JB, and JFQA. However, the rankings vary with different measures. FM was in ninth place, after Econometrica, Journal of Political Economy, and American Economic Review. The literature generally seems to agree on a top two of JF and JFE, but third place seems to be a tie among several candidates: RFS, JFQA, and JB.

Although the literature on finance journal quality most often treats citations as the best measure, citations may suffer from inherent biases, such as self-citing. Citation studies also cannot capture the perspective of individuals who have different research interests or come from different geographic areas.

There are two approaches other than citations that can be used to rate journals or departments. One is to survey department chairs (e.g., Coe and Weinstock, 1983; and Borde, Cheney, and Madura, 1999), or to survey faculty (e.g., Oltheten, Theoharakis and Travlos, 2003; and Christoffersen, Englander, Arize, and Malindretos, 2001). The second approach is to rank departments on the basis of memberships on journal editorial boards (Chan and Fok, 2003).

I select 1996 because it gives a recent time frame of seven to eight years. This or an even shorter time period often matches many evaluation situations, such as promotion and tenure decisions or filling new positions within a department. More recent records can also be used for public relations purposes to illustrate the quality of an academic program to alumni and potential students and donors. Chung et al. (2001) show that the number of citations for articles cited in JF, JFE, and JFQA between 1974 and 1998 increases sharply during the first three years after publication, reaches a peak during the fourth year, and then declines gradually after that. My sample period should contain the peak period for citations. However, to the extent that citations occur for these articles in later years, their total citation records will be understated. For example, in 2002 the JCR's cited half-lives in years for the sample of journals are: JFE, JFQA, and JB (>10), JF (9.3), FM (9.2), JRI (9.1), JFM (8.5), JPM (7.9), RFS (7.8), JMCB (7.7), JIMF (7.3), JBF (6.5), JREFE (6.3), JFI (5.8), and REE (5.0). JCR defines the cited half-life as the number of years after the publication year that account for 50% of the citations received for an article.
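The cited half-life figures above come from the JCR, but the idea can be illustrated directly. The sketch below (hypothetical yearly citation counts, and a simple linear interpolation) finds the article age by which half of a journal's citations have accumulated; the exact JCR procedure may differ in detail:

```python
# Sketch of a cited half-life: the (interpolated) article age at which the
# running citation total reaches 50% of all citations. Counts are hypothetical.

def cited_half_life(cites_by_age):
    """cites_by_age[k-1] = citations received by articles k years old.
    Returns the interpolated age covering 50% of total citations."""
    total = sum(cites_by_age)
    running = 0.0
    for age, c in enumerate(cites_by_age, start=1):
        if running + c >= total / 2:
            # interpolate linearly within the year that crosses the 50% mark
            return age - 1 + (total / 2 - running) / c
        running += c
    return float(len(cites_by_age))

# e.g., a journal whose citations decay over five years
half_life = cited_half_life([40, 30, 15, 10, 5])
```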

Chung et al. (2001) identify the most cited authors from 1974 to 1998 using citations in JF, JFE, and JFQA. Chan (2001) examines citation data for 1998 and 1999 from JF, JFE, JFQA, and RFS to rank journals by citation proportions over several time lags.

In this article, I use a broader set of citations, the SSCI, which allows me to determine the relative influence of more specialized journals in areas such as banking, real estate, and insurance. These areas are less likely to be reflected in the top journals of JF, JFE, JFQA, and RFS. For example, of the 15 journals I consider in this article, JMCB, JRI, and REE rank in 5th, 10th, and 13th place, respectively, but in Chan's study JMCB is tied for 15th place with Journal of Empirical Finance and Journal of Financial Research, JRI is not ranked at all, and REE is in 27th place. These differences highlight the segmentation of these important areas and the lack of their recognition in the top three or four journals.

The SSCI cites approach captures cites in the closely related fields of economics, accounting, and other areas of business and law. The JCR impact factors that are often used to rank journals include cites in these areas. Further, Borokhovich, Bricker, and Simkins (2000) find that approximately one-third of all citations to articles in JF and JFE come from journals outside finance, especially economics journals, with 15.3% for JF and 19.4% for JFE. Chan (2001) examines the journals cited in the 1998 and 1999 issues of JF, JFE, RFS, and JFQA. In terms of the percentage of citations in this set of journals, 67.73, 18.73, 8.55, 2.72, and 2.26% were from finance, economics, statistics, accounting, and miscellaneous (business, law, and regulatory) journals, respectively. These two studies illustrate the importance of non-finance journals to the finance literature and vice versa, and support the use of a broader set of citations to fully measure an article's impact on the literature.

Although the SSCI cites approach reflects a broader set of journals and fields than a top three or four approach, it does not include all cites. For example, in my study, I do not include some or all of the cites in other finance journals such as FAJ, JFSR, JFR, JBFA, FR, Journal of Real Estate Research (JRER), and Journal of Financial Markets (JFINMKT) because they are either not covered at all, or not fully covered, by the SSCI during my sample period.

Another example of the differences in these areas is a 1995 survey of FMA members on rankings of journals in subfields by Christoffersen et al. (2001). They find the top four journals in corporate finance are JF, JFE, JFQA, and FM; the top two journals in financial institutions and markets are JBF and JMCB; the top journal in finance and insurance is JRI; the top journal in international finance is JIMF; the top three journals in investments are JPM, JF, and FAJ; and the top three journals in real estate finance are JREFE, REE, and JRER.

Using a global faculty survey, Oltheten et al. (2003) find that the research interest of the respondents (corporate finance, investments and derivatives, financial institutions, and international finance, institutions and markets), geographic location (North America, Europe, Asia, and Australia/New Zealand), seniority, and affiliation with a journal affects the quality perceptions.

Krishnan and Bricker (2004) attempt to separate the citation performance of an article for the year of publication and the next two years between the quality of the article and the value added by the journal. They use the author reputation and the school reputation as proxies for the quality of the article. They use journal age, editorial board quality, and readership characteristics as proxies for the value added by the journal. In their analysis of articles published from 1990 to 1998 in JF, JFE, RFS, JFQA, and JB, they find that JF, JFE and RFS add significant value in terms of citations over and above inherent article quality. For example, in their Table V, they find that after controlling for the article quality for the overall sample period, JF, RFS, JFE, JB, and JFQA add 4.1655, 3.1805, 2.8590, 0.4142 and 0.3967 SSCI cites, respectively. Only the coefficients for JF, RFS, and JFE are statistically significant.

From this brief review, it is clear that the literature related to the quality of journals has taken many approaches. The rankings for the survey studies, particularly for the highest ranked journals, are generally consistent with the rankings that use the citations-based measures.

Like past authors in this literature, I recognize the limitations of any one approach to determining the quality of a journal. Even with objective measures such as citations, there are limitations beyond the choice among different citation measures. I consider author self-citations in my analysis.

I also note that a citation may be a "correction" citation, but in this case the citation may be more the reflection of a poor article, rather than a top article. To define a citation as a correction requires a subjective judgment by the citing author. Many articles extend other studies in the literature, which can be construed as correcting previous works or building on those earlier works. My experience suggests that citations are based on building on earlier research. In addition, an objective measure of a correction cite is difficult to define.

An article can be categorized by type of article, e.g., survey, empirical, theoretical, or subject area. The articles I cite in this study are not categorized by type of article. To the extent that a reviewer of an author's publication and citation record values these characteristics in different ways, then the value assigned to the citations associated with an article will vary by reviewer.

Another issue is the "hot topic" area. Suppose I could objectively define a hot topic area and could identify the time frame for the topic. The number of an article's citations would then appear to be positively related to how early in the topic's time frame the article appeared, the contribution of the article, and the number of articles published in the hot topic area. These factors are difficult to objectively define and separate. And again, based on these factors, a reviewer may attach different values to the citations.

II. Results

Table I presents a comparison of 5,979 SSCI cites (as collected on February 7-12, 2004) for 626 articles published in 1996 in 15 leading finance journals. I sort the journals by the mean number of cites per article. JF is at the top with 24.3 cites, followed closely by JFE with 23.1 cites. RFS, JFQA, JMCB, and JB are next with 16, 13.2, 10.4, and 9.8 cites, respectively. I note that if the journals are ranked by the 50th percentile (median) cites, JB is in fourth place, followed by JFQA and JMCB. These results are consistent with past studies, which suggests they are robust and generalizable to other years. However, I caution that rankings may vary somewhat by measure and by year. This caution applies to all studies on finance research productivity and rankings.

I use a difference-of-means test, with equal or unequal variances as appropriate, to test whether one journal's mean is higher than another's. I apply this test to each pair within this group of six journals. Using a 5% level of significance, I find that JF and JFE are significantly higher than the other four; RFS is not significantly higher than JFQA or JMCB but is significantly higher than JB; and JFQA, JMCB, and JB are not significantly higher than each other. For all 15 journals, the median and mean average cites are four and 9.55, respectively.
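For readers who want the mechanics of the test, the following is a minimal sketch (not the author's code) of the two-sample t statistic with a pooled (equal-variance) standard error and the Welch (unequal-variance) alternative:

```python
# Two-sample t statistic: pooled standard error when variances are treated
# as equal, Welch's standard error when they are not. Data are hypothetical.
import math

def t_statistic(x, y, equal_var=False):
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    if equal_var:
        # pooled variance, weighted by degrees of freedom
        sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
        se = math.sqrt(sp2 * (1 / nx + 1 / ny))
    else:
        # Welch's standard error
        se = math.sqrt(vx / nx + vy / ny)
    return (mx - my) / se
```

With equal sample sizes the two standard errors coincide; they diverge when group sizes and variances differ, which is when the equal/unequal choice matters.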

Table I also provides information on the distribution of average cites. The table presents standard errors and more detailed information, from the minimum to the maximum, in increments of every tenth percentile. Given the skewness of these distributions, the more detailed information allows a better picture of each of the 15 journals. Although these results for all 15 journals are not shown in the table, the percentage of articles with zero cites is 10%, with one or zero cites is 26%, with two or fewer cites is 37%, with three or fewer cites is 45%, and with four or fewer cites is 53%. Thus, if this set of journals is truly representative of the finance literature, then having more than four cites over the seven to eight years after publication places an article in the upper half of the distribution. If I use the mean of average cites of 9.55 as the measure of central tendency, then ten or more cites places an article in the top 29%. Across all 15 journals, if an article is going to place in the top 10%, then it must be cited 25 or more times.
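The placement logic described here reduces to counting the share of articles at or below a given citation level. A one-function sketch, with hypothetical citation counts:

```python
# Share of articles at or below a citation count k (hypothetical data).

def share_at_or_below(cite_counts, k):
    return sum(1 for c in cite_counts if c <= k) / len(cite_counts)

counts = [0, 0, 1, 2, 3, 4, 4, 6, 10, 30]  # hypothetical article cite counts
lower_half = share_at_or_below(counts, 4)   # share with four or fewer cites
```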

When researchers examine an article's citation record, they often include information on self-cites. I define an article self-cite as a cite where one or more of the authors is an author of the citing article. Author self-cites should not be confused with journal self-cites, which are cites to articles in the same journal, e.g., Chan, Fok, and Pan (2000). Since the focus in my study is the article, I examine only author self-cites. The percentages of author self-cites for the 15 journals vary from lows of 4% and 5.2% in JPM and JFQA, respectively, to highs of 13.3% and 22.3% in JB and JREFE, respectively.
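The author self-cite definition above amounts to a set-overlap test between the cited article's author list and the citing article's author list. A sketch with hypothetical author names:

```python
# Author self-cite: the cited and citing author sets overlap. Names are
# hypothetical; in practice, name disambiguation would also be needed.

def is_author_self_cite(cited_authors, citing_authors):
    return bool(set(cited_authors) & set(citing_authors))

def self_cite_rate(cited_authors, citing_author_lists):
    """Fraction of the citing articles that are author self-cites."""
    hits = sum(is_author_self_cite(cited_authors, a) for a in citing_author_lists)
    return hits / len(citing_author_lists)
```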

I also provide the means and medians without self-cites. The rankings by the means without self-cites are the same as the rankings of the means with self-cites. The medians for both measures for all 15 journals, and for 11 of the 15 journals, are the same. Compared to the medians with self-cites, the medians without self-cites decrease by one for JF, RFS, and JMCB, and by two for JB. The rankings for the two median measures are similar. The bottom two rows of Table I show the statistics for the two all-15 distributions. Using a paired two-sample means t-test, the difference in means is significant at the 0.0001 level. However, except for the difference in means of 0.77 cites, the distributions appear similar. If an article's citations include self-cites, an evaluator reviewing that record may partially or completely discount those cites.

Table II presents information for the 15 journals by the percentage of articles that meet certain measures that could define a "top article." In Table II, I use the median, mean, 90th percentile, and 95th percentile for all 15 journals to define a top article. If I define a top article as one that has above the median of four cites per article, then the ANTA error is relatively low for JF (6%) and JFE (14%). JB, RFS, and JFQA are clustered around 26% to 28%, followed by JFI and JMCB at 33% and 47%, respectively. When I define a top article as being above the All 15 mean of 9.55 cites per article (also the top 29% of All 15 articles, as shown in the last row), the error of accepting a non-top article increases for the top six journals. For JF and JFE, the error rates are 30% and 26%, respectively, followed by RFS at 46% and JFQA at 55%. The rest of the journals range between 63% and 100%. When I define a top article as being above the 90th percentile of 24 cites per article, the error of accepting a non-top article increases dramatically, even for the top two journals, JF and JFE, with errors of 65% and 62%, respectively, followed by JFQA and RFS with error rates of 79% and 84%, respectively. The error rates for the other journals range from 89% to 100%. When I use above the 95th percentile of 35 cites per article, the error of accepting a non-top article is very high, even for the top two journals, with errors of 77% and 83%, respectively. RFS, JFQA, JMCB, and JIMF are very high at 89%, 93%, 95%, and 98%, respectively. The error rates for the other journals are 100%.

The ANTA error can be viewed on an individual journal basis, as in Table II, or on a cumulative top N journals basis as in Table III and Figure 1. Table III examines the ANTA errors on a cumulative basis, i.e., as the number N in the "top N" journals increases. Figure 1 shows the same information in graphic form.

[FIGURE 1 OMITTED]

When I use the median or mean as the measure of a top article, the results for the top four or top seven journals are not much higher than for the top three journals. For example, the ANTA error for the median measure is 15% for the top three journals and 17% and 25% for the top four and top seven journals, respectively. The rate of increase is lower when I use the 90th and 95th percentiles, because the errors start at higher levels.

Table IV examines the RTA errors on a cumulative basis, i.e., as the number N in the top N journals increases. Figure 2 shows the same information in graphic form. Using above the median to define a top article, the cumulative RTA error for the top three journals (JF, JFE, and RFS) is 56%. In other words, a decision rule of accepting only articles in these three journals rejects 56% of the top articles in this category. A top five journals approach rejects 39% of the top articles. A top ten journals (JF to JRI) approach reduces the RTA error to 12%. If I use above the mean to define a top article, the cumulative RTA error of rejecting a top article with a top three journals of JF, JFE, and RFS is 44%. A top five journals approach rejects 26% of the top articles. A top nine (JF to JBF) reduces the RTA error to 8%. Using above the 90th percentile as the definition of a top article, the cumulative RTA error of rejecting a top article with a top three journals of JF, JFE, and RFS is 26%. The error decreases to 8% with the addition of JFQA and JMCB. Using above the 95th percentile as the definition of a top article, the RTA error with the top three of JF, JFE, and RFS is 18%. The error decreases to 12% when I add JFQA and to 3% when I add JMCB, JB, and JFI.
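The cumulative calculation behind Table IV can be sketched as follows: for each prefix of a journal ranking, the RTA error is the share of top articles published outside that prefix. The journal order and article data below are hypothetical:

```python
# Cumulative RTA error as the "top N" journal set grows. Hypothetical data.

def cumulative_rta(articles, journal_order, cutoff):
    """articles: list of (journal, cites); journal_order: ranked journal names.
    Returns the RTA error for each top-N prefix of journal_order."""
    top = [(j, c) for j, c in articles if c > cutoff]
    errors = []
    accepted = set()
    for j in journal_order:
        accepted.add(j)
        # top articles still outside the accepted journal set
        errors.append(sum(1 for jj, _ in top if jj not in accepted) / len(top))
    return errors

arts = [("JF", 20), ("JFE", 15), ("RFS", 12), ("JBF", 11), ("JPM", 3)]
errs = cumulative_rta(arts, ["JF", "JFE", "RFS", "JBF"], cutoff=10)
```

As in Figure 2, the error falls monotonically as journals are added, and falls fastest for the journals holding the most top articles.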

[FIGURE 2 OMITTED]

Figure 2 shows that as the definition of a top article increases from the median to the 95th percentile, the initial slopes of the lines increase and the lines level off much faster.

Comparing the effects of the Type I and II errors of a decision rule usually requires assigning a cost to each type of error. For the ANTA and RTA errors, there does not appear to be an obvious difference in the costs. An evaluator who wished to assign different costs to the two errors would need to weight the errors or costs in some manner. I assume that the costs are equivalent, and so I combine the ANTA and RTA errors and show them in Table V and Figure 3.

[FIGURE 3 OMITTED]

Across each row in Table V, I see that the combined error increases as the definition increases from median to 95th percentile. Looking down the four columns related to the different definitions and Figure 3, I see that the combined errors reach a minimum, but then seem to plateau. For example, for the above the median definition of a top article the combined error decreases from 71% for a top three approach to a minimum of 49% for a top 12-13 approach, and only increases to 53% for all 15 journals.

I also ask how well impact factors predict future cites. With my sample, all 15 journals have a JCR impact factor in each year from 1996 to 2002.

JCR defines an impact factor for a journal as the number of cites in year t for articles published in years t-1 and t-2 divided by the number of articles published in years t-1 and t-2. This definition is a relatively short-term measure of the impact as measured by citations, and does not necessarily predict citation performance. However, to the extent that people use impact factors as measures of potential cites, they bestow a predictive power on the factors.
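The JCR definition just stated is a simple ratio; a sketch with hypothetical counts:

```python
# JCR-style impact factor for year t: cites in year t to articles published
# in t-1 and t-2, divided by the articles published in t-1 and t-2.
# All numbers below are hypothetical.

def impact_factor(cites_to_t1, cites_to_t2, articles_t1, articles_t2):
    return (cites_to_t1 + cites_to_t2) / (articles_t1 + articles_t2)

ifactor = impact_factor(cites_to_t1=120, cites_to_t2=80,
                        articles_t1=50, articles_t2=50)
```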

Table VI presents the JCR impact factors for each year, the average impact factors from 1996 to 2002, and the mean actual cites for the sample period. Table VI also presents the estimated number of cites, based on a regression with the mean cites for each journal as the dependent variable and its average impact factor as the independent variable. The regression has an adjusted [R.sup.2] of 0.859 and is significant at the 0.001 level.

There is little doubt that the mean number of journal cites is highly correlated with average impact factors. However, the actual cites for some specific journals are significantly different from the estimated cites. For example, JFQA's actual mean cites are 185% of the estimated cites and FM's actual mean cites are 38% of the estimated cites. The standardized residuals for JFQA and FM are outside the 95% confidence intervals for the estimates. Even with the high [R.sup.2] for the regression model, researchers need to be careful about using it to predict cites for a specific journal.
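A minimal version of the Table VI regression (simple OLS of mean cites on average impact factor, with roughly standardized residuals for flagging outlying journals such as JFQA and FM) might look like the following. The data and the simple residual standardization are illustrative assumptions, not the study's:

```python
# Simple OLS sketch: y = a + b*x, with residuals scaled by the residual
# standard error as a rough standardization. Data are hypothetical.

def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx  # slope
    a = my - b * mx                                               # intercept
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = (sum(r ** 2 for r in resid) / (n - 2)) ** 0.5  # residual std. error
    std_resid = [r / s for r in resid]
    return a, b, std_resid

# hypothetical impact factors (x) and mean cites (y)
a, b, std_resid = ols_fit([0.5, 1.0, 1.5, 2.0], [4, 9, 13, 22])
```

Journals whose standardized residuals fall outside roughly plus or minus two would be the analogues of the JFQA and FM outliers discussed above.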

I also note that the ranks differ between the average impact factor (IF) and the actual mean cites. JFQA, JB, JRI, JFM, FM, and JREFE differ by at least two ranks. This example shows how similar measures can provide quite different ranks and provides another example of how difficult it is to pick an "N set" of top journals.

I run a similar regression model, this time using the actual cites per article as the dependent variable for the 626 articles in the 1996 issues of the 15 journals. The model has an adjusted [R.sup.2] of 0.261, which is significant at the 0.001 level. The estimated model is similar to the previous regression model in that the estimated cites per article equal -0.188 + (9.7 x Average impact factor). There is little doubt that the actual article cites are correlated with the average impact factors. However, if I look at the results from another perspective, 73.9% of the variation is unexplained. These results indicate that with this sample there is a strong positive overall relation between JCR impact factors and actual cites. However, again, the researcher should be very careful about using an impact factor's predictive ability for a specific journal, and even more careful about using it for a specific article.

I ask, "Can the total number of cites in lesser journals or cites in journals not included on the list meet the impact test?" For example, one professor has two FM articles and each has five cites. Another professor has one RFS article with seven cites. Which one makes the larger impact on the literature? I can make the case that the two FM articles have a larger impact than does the one RFS article.

But I note that cites in some journals are not captured in the SSCI database. Should researchers consider cites in journals not captured in the SSCI database? Other types of articles may not appear on the journal lists, such as articles in the publications of federal regulators such as regional Federal Reserve Banks or publications of major trade organizations such as the Appraisal Journal. These journals may not appear on a formal list of journals, but some of these articles may be cited in the SSCI database. Should the impact of these articles be considered?

To the extent that a school is more interested in the actual impact of a professor's work than the journals in which the articles appear, the school should ask for more detailed information. In addition to a regular publication list, an evaluator may want to see a list of those journals and articles that cite a person's collection of works. Fishe (1998) examines such information in his analysis of research standards for promotion to full professor at 90 US finance departments ranked one to 96 in Borokhovich, Bricker, Brunarski, and Simkins (1995). Fishe (1998) shows that 26 of the 76 professors promoted to full professor at the finance departments rated 21 to 96 did not publish in one of the top three (JF, JFE, RFS), but many of them did have numerous SSCI cites. The number of SSCI cites per year since Ph.D. degree for the 76 professors ranged from 0.1 to 124.8 with a median of 7.0 and a 25th percentile of 3.9.

Fishe (1998) also provides the cumulative cites per year since Ph.D. degree for each professor. I combine and reorganize his data to shed further light on the subject in my paper. For each grouping of individuals by number of top three articles, the number of professors and the mean cites per year are: zero top three articles (n = 26, 6.59 cites); one top three article (n = 12, 7.88 cites); two to three top three articles (n = 15, 8.67 cites); four to five top three articles (n = 12, 8.98 cites); and more than five top three articles (n = 11, 34.05 cites). (1)

One result of this analysis is that the mean number of cites per year for the different groups does not become statistically significantly different at the 0.05 level until a professor has more than five top three articles.

When I run a regression of the 76 professors using the cites/year as the dependent variable and the number of top three articles as the independent variable, I find that the adjusted R² is 0.357 and the model is significant at the 0.001 level. When I run the same regression by dropping the 11 observations with more than five top three articles, the adjusted R² drops to -0.004 and is not significant at any reasonable level of confidence.

Fishe's (1998) work illustrates that approximately one-third of the 76 full professors were able to have an impact on the literature, as measured by SSCI cites per year, without publishing even one article in JF, JFE, or RFS. These professors make an impact by publishing in other finance, economics, and business journals. Based on this study, the citation list may be a better reflection of an individual's impact on the literature than that person's publication list.

Other information that reflects a person's impact on the literature may include the relative percentage of self-cites or the number of co-authors for each article cited. If an article is newly published, then the journal ranking may be the best information on the potential impact of a work. However, if the article is three years old or older, then an evaluator should require a record of where that work has been cited. If the evaluator does not ask for this information, then he or she is more likely to accept a non-top article in a top journal as a top article, or reject a top article in a non-top journal.

Even with a complete citation record for an article, evaluators can continue to disagree. As discussed earlier in this article, an evaluator might consider many characteristics related to a citation. Where is the article cited? Is the citation a self-cite? Is the evaluator familiar with the cited or citing journals because of his or her research interests or geographic location? Is the cited work being corrected in the citing article or is the citing article extending the work of the cited article? Does the cited article provide a survey of previous literature, an empirical test of an already established theory, or an important new theoretical insight? All three article types provide information, but each evaluator might view the relative contributions differently.

Another evaluative factor that is pertinent to this article is the citation record for an article relative to its journal. Krishnan and Bricker (2004) find that JF, JFE, and RFS add significant value in terms of citation over and above inherent article quality, but JFQA and JB do not. Krishnan and Bricker's paper indicates that the number of citations may be a function of the article quality and the method of dissemination, i.e., the journal. Referring back to Table I, six citations for a JF article would put it in the bottom quartile and at 25% of the mean cites of 24.3 for that journal. Six citations for a JBF article would put it in the upper quartile and at 130% of the mean of 4.6 cites for that journal. Given Krishnan and Bricker's results, an evaluator might argue that when he or she controls for the journal of publication, the six cites in JBF show a higher quality than do the six cites in JF.
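The journal-relative reading of a citation count can be sketched from the Table I decile cut points. The cut points and journal means are copied from Table I; the `placement` helper itself is a hypothetical illustration, not the article's method.

```python
from bisect import bisect_right

# Sketch: place a cite count within a journal's distribution using the
# decile cut points (Min, 10%, ..., 90%) from Table I. "Percentile" here
# is the coarse reading "at or above the p-th decile cut"; counts below
# the minimum are not handled.
deciles = {
    "JF":  [0, 3, 6, 9, 13, 19, 24, 27, 38, 54],
    "JBF": [0, 1, 1, 1, 2, 2, 3, 4, 6, 12],
}
mean_cites = {"JF": 24.3, "JBF": 4.6}

def placement(journal, cites):
    cuts = deciles[journal]
    # index of the highest decile cut the count reaches (0 = the minimum)
    pct = 10 * (bisect_right(cuts, cites) - 1)
    rel = cites / mean_cites[journal]
    return pct, rel

for j in ("JF", "JBF"):
    pct, rel = placement(j, 6)
    print(f"6 cites in {j}: at least the {pct}th percentile, "
          f"{rel:.0%} of the journal mean")
```

Six cites lands at about the 20th percentile of JF (25% of its mean) but at about the 80th percentile of JBF (130% of its mean), which is the comparison made above.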

III. Summary and Conclusion

In this study, I rank 15 leading finance journals by the average number of Social Sciences Citation Index (SSCI) cites per article from 1996 to January 2004 for articles published in those 15 journals in 1996. Other recent studies often use Journal Citation Reports (JCR) impact factors to rank journals. Although my study uses one year of publications, 1996, the citations accumulate over a much longer period than the two-year window of the JCR impact factors.

I also define and analyze a "top article" (average number of cites is above the median, mean, or 90th or 95th percentiles, for a set of leading finance journals) as opposed to a "top journal article." The results show that a top three (JF, JFE, RFS) approach to identifying top articles (based on being above the mean number of 9.55 cites per article) leads to a Type I error (a top article is rejected by the decision rule) 44% of the time and a Type II error (a non-top article is accepted as a top article) 33% of the time. I provide similar information for the 15 individual journals and for different levels of citations. At the 90th and 95th percentiles, the Type I error decreases but the Type II error increases dramatically. These results demonstrate that if an evaluator is interested in identifying top articles, the Type I and II errors associated with a top three journals approach should lead him or her to look at a broader set of journals and to examine each article for its intrinsic value, rather than the general quality of its journal.
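The decision-rule evaluation summarized here is easy to state in code. A minimal sketch on invented data follows: the journal abbreviations are real, but the eight articles and their cite counts are hypothetical, not drawn from the 626-article sample.

```python
# Sketch of the paper's decision-rule evaluation on hypothetical data.
# A "top article" is one whose cites exceed the chosen threshold; the
# "top N journals" rule accepts any article published in that set.
articles = [  # (journal, cites) - invented for illustration
    ("JF", 30), ("JF", 2), ("JFE", 15), ("RFS", 1),
    ("JBF", 25), ("FM", 12), ("JPM", 0), ("JRI", 3),
]

def error_rates(articles, top_journals, threshold):
    top = [a for a in articles if a[1] > threshold]
    accepted = [a for a in articles if a[0] in top_journals]
    # Type I (RTA): top articles the rule rejects, as a share of top articles
    rta = sum(1 for a in top if a[0] not in top_journals) / len(top)
    # Type II (ANTA): accepted articles that are not top, as a share of accepted
    anta = sum(1 for a in accepted if a[1] <= threshold) / len(accepted)
    return rta, anta

rta, anta = error_rates(articles, {"JF", "JFE", "RFS"}, 9.55)
print(f"RTA (Type I): {rta:.0%}, ANTA (Type II): {anta:.0%}")
```

With the real data, `articles` would hold the 626 journal-cites pairs, and the thresholds would be the sample median, mean, and 90th and 95th percentiles from Table I.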

I also examine the relation between the JCR impact factors for the 15 journals and their actual number of cites. The first regression uses the mean cites per journal as the dependent variable and the average JCR impact factor (1996-2002) as the independent variable. A second regression uses the actual cites per article as the dependent variable and the average JCR impact factor as the independent variable. The results indicate that there is a strong overall positive relation between JCR impact factors and actual cites. However, I urge caution on an impact factor's predictive ability for a specific journal or a specific journal article.

Based on this study, the citation list may be a better reflection of an individual's impact on the literature than that person's publication list, and an evaluator may want a record of where that work has been cited. If the evaluator does not solicit or develop this information, then he or she is more likely to accept a non-top article in a top journal as a top article, or reject a top article in a non-top journal.

**Table I. Distribution of Social Sciences Citation Index (SSCI) Cites for 1996 Journal Articles (Collected on February 7-12, 2004)**

The 15 journals selected for citation analyses are Journal of Finance (JF), Journal of Financial Economics (JFE), Review of Financial Studies (RFS), Journal of Financial and Quantitative Analysis (JFQA), Journal of Money Credit and Banking (JMCB), Journal of Business (JB), Journal of Financial Intermediation (JFI), Journal of International Money and Finance (JIMF), Journal of Banking and Finance (JBF), Journal of Risk and Insurance (JRI), Journal of Futures Markets (JFM), Financial Management (FM), Real Estate Economics (REE), Journal of Real Estate Finance and Economics (JREFE), and Journal of Portfolio Management (JPM). They are sorted from high to low by the mean SSCI cites during the sample period. In the bottom two rows, summary citation statistics are provided for 626 articles in the 15 journals (labeled as All 15) and the 626 articles with self-cites excluded (labeled as All 15 Without Self-Cites). Self-cites are defined as any cite where one or more of the cited authors is one of the citing authors.

| Journal | Min. | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
|---|---|---|---|---|---|---|---|---|---|---|
| JF | 0 | 3 | 6 | 9 | 13 | 19 | 24 | 27 | 38 | 54 |
| JFE | 1 | 6 | 8 | 10 | 13 | 18 | 20 | 27 | 34 | 45 |
| RFS | 0 | 3 | 4 | 5 | 8 | 11 | 12 | 18 | 22 | 39 |
| JFQA | 0 | 2 | 3 | 5 | 7 | 7 | 10 | 14 | 26 | 32 |
| JMCB | 0 | 1 | 2 | 3 | 4 | 6 | 7 | 13 | 15 | 25 |
| JB | 1 | 1 | 2 | 5 | 8 | 9 | 9 | 13 | 15 | 21 |
| JFI | 0 | 4 | 4 | 4 | 5 | 5 | 7 | 8 | 15 | 18 |
| JIMF | 0 | 1 | 1 | 1 | 3 | 3 | 4 | 6 | 7 | 12 |
| JBF | 0 | 1 | 1 | 1 | 2 | 2 | 3 | 4 | 6 | 12 |
| JRI | 0 | 0 | 1 | 1 | 2 | 2 | 4 | 6 | 10 | 12 |
| JFM | 0 | 0 | 1 | 1 | 2 | 3 | 3 | 4 | 6 | 8 |
| FM | 0 | 0 | 0 | 1 | 1 | 2 | 3 | 5 | 8 | 9 |
| REE | 0 | 0 | 1 | 2 | 2 | 2 | 3 | 3 | 6 | 7 |
| JREFE | 0 | 1 | 1 | 1 | 1 | 1 | 2 | 3 | 4 | 5 |
| JPM | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 2 | 3 |
| All 15 | 0 | 0 | 1 | 2 | 3 | 4 | 6 | 9 | 14 | 24 |
| All 15 Without Self-Cites | 0 | 0 | 1 | 2 | 2 | 4 | 5 | 8 | 13 | 22 |

| Journal | Max | Mean | Std. Error | No. of Articles | % Author Self-Cites | Mean Without Self-Cites | Median Without Self-Cites |
|---|---|---|---|---|---|---|---|
| JF | 158 | 24.3 | 2.95 | 69 | 6.9 | 22.6 | 18 |
| JFE | 95 | 23.1 | 2.98 | 47 | 6.1 | 21.7 | 18 |
| RFS | 77 | 16.0 | 2.93 | 37 | 9.8 | 14.4 | 10 |
| JFQA | 53 | 13.2 | 2.55 | 29 | 5.2 | 12.5 | 7 |
| JMCB | 86 | 10.4 | 1.88 | 55 | 8.4 | 9.5 | 5 |
| JB | 34 | 9.8 | 1.88 | 19 | 13.3 | 8.5 | 7 |
| JFI | 22 | 8.2 | 1.62 | 15 | 8.2 | 7.5 | 5 |
| JIMF | 66 | 5.9 | 1.40 | 52 | 9.9 | 5.3 | 3 |
| JBF | 30 | 4.6 | 0.58 | 89 | 11.1 | 4.1 | 2 |
| JRI | 14 | 4.4 | 0.81 | 29 | 12.4 | 3.9 | 2 |
| JFM | 26 | 3.7 | 0.67 | 47 | 8.7 | 3.4 | 3 |
| FM | 12 | 3.5 | 0.67 | 32 | 6.2 | 3.3 | 2 |
| REE | 9 | 3.0 | 0.49 | 28 | 7.1 | 2.8 | 2 |
| JREFE | 19 | 2.6 | 0.58 | 34 | 22.3 | 2.0 | 1 |
| JPM | 11 | 1.3 | 0.33 | 44 | 4.0 | 1.2 | 1 |
| All 15 | 158 | 9.55 | 0.59 | 626 | 8.1 | 8.78 | 4 |
| All 15 Without Self-Cites | 151 | 8.78 | 0.56 | 626 | | | |

**Table II. Type II Error: Accept a Non-Top Article as a Top Article When the Article Does Not Meet the Category Criterion (ANTA)**

The 15 journals selected are Journal of Finance (JF), Journal of Financial Economics (JFE), Review of Financial Studies (RFS), Journal of Financial and Quantitative Analysis (JFQA), Journal of Money Credit and Banking (JMCB), Journal of Business (JB), Journal of Financial Intermediation (JFI), Journal of International Money and Finance (JIMF), Journal of Banking and Finance (JBF), Journal of Risk and Insurance (JRI), Journal of Futures Markets (JFM), Financial Management (FM), Real Estate Economics (REE), Journal of Real Estate Finance and Economics (JREFE), and Journal of Portfolio Management (JPM). The last row includes summary statistics for 626 articles in the 15 journals (labeled as All 15). The four criteria for a Type II error are the percentage of articles in the identified journal not having cites above: 1) the median of 4 cites, 2) the mean of 9.55 cites, 3) the 90th percentile of 24 cites, and 4) the 95th percentile of 35 cites. The criteria measures are based on the statistics for 626 articles in the 15 journals.

| Journal | % > Median (4 Cites) | Type II Error % (Accept Non-Top 50% Article) | % > Mean (9.55 Cites) | Type II Error % (Accept Non-Top Average Article) | % > 90th %ile (24 Cites) | Type II Error % (Accept Non-Top 10% Article) | % > 95th %ile (35 Cites) | Type II Error % (Accept Non-Top 5% Article) |
|---|---|---|---|---|---|---|---|---|
| JF | 86 | 14 | 70 | 30 | 35 | 65 | 23 | 77 |
| JFE | 94 | 6 | 74 | 26 | 38 | 62 | 17 | 83 |
| RFS | 73 | 27 | 54 | 46 | 16 | 84 | 11 | 89 |
| JFQA | 72 | 28 | 45 | 55 | 21 | 79 | 7 | 93 |
| JMCB | 53 | 47 | 36 | 64 | 11 | 89 | 5 | 95 |
| JB | 74 | 26 | 37 | 63 | 5 | 95 | 0 | 100 |
| JFI | 67 | 33 | 27 | 73 | 0 | 100 | 0 | 100 |
| JIMF | 38 | 62 | 15 | 85 | 4 | 96 | 2 | 98 |
| JBF | 30 | 70 | 17 | 83 | 1 | 99 | 0 | 100 |
| JRI | 34 | 66 | 21 | 79 | 0 | 100 | 0 | 100 |
| JFM | 28 | 72 | 6 | 94 | 2 | 98 | 0 | 100 |
| FM | 31 | 69 | 9 | 91 | 0 | 100 | 0 | 100 |
| REE | 21 | 79 | 0 | 100 | 0 | 100 | 0 | 100 |
| JREFE | 15 | 85 | 3 | 97 | 0 | 100 | 0 | 100 |
| JPM | 4 | 96 | 2 | 98 | 0 | 100 | 0 | 100 |
| All 15 (N = 626 Articles) | 47 | 53 | 29 | 71 | 10 | 90 | 5 | 95 |

**Table III. Cumulative Type II Error: Accept a Non-Top Article as a Top Article (ANTA) When the Article Does Not Meet the Category Criterion of Top N Journals**

The journals [Journal of Finance (JF), Journal of Financial Economics (JFE), Review of Financial Studies (RFS), Journal of Financial and Quantitative Analysis (JFQA), Journal of Money Credit and Banking (JMCB), Journal of Business (JB), Journal of Financial Intermediation (JFI), Journal of International Money and Finance (JIMF), Journal of Banking and Finance (JBF), Journal of Risk and Insurance (JRI), Journal of Futures Markets (JFM), Financial Management (FM), Real Estate Economics (REE), Journal of Real Estate Finance and Economics (JREFE), and Journal of Portfolio Management (JPM)] are cumulated from the top down. The four criteria for a Type II error are the percentage of articles in the cumulative set of journals not having cites above: 1) the median of 4 cites, 2) the mean of 9.55 cites, 3) the 90th percentile of 24 cites, and 4) the 95th percentile of 35 cites, which are based on the statistics for 626 articles in the 15 journals. For example, for the Top 3 journals (JF, JFE, RFS), 33% of the articles in those journals did not have cites above the mean of 9.55 cites.

| Journal | Top N Journals | Cumulative % ≤ Median (4 Cites) | Cumulative % ≤ Mean (9.55 Cites) | Cumulative % ≤ 90th %ile (24 Cites) | Cumulative % ≤ 95th %ile (35 Cites) |
|---|---|---|---|---|---|
| JF | 1 | 14 | 30 | 65 | 77 |
| JFE | 2 | 11 | 28 | 64 | 79 |
| RFS | 3 | 15 | 33 | 69 | 82 |
| JFQA | 4 | 17 | 36 | 70 | 84 |
| JMCB | 5 | 24 | 43 | 75 | 86 |
| JB | 6 | 24 | 44 | 76 | 87 |
| JFI | 7 | 25 | 46 | 77 | 88 |
| JIMF | 8 | 31 | 52 | 80 | 89 |
| JBF | 9 | 39 | 59 | 84 | 92 |
| JRI | 10 | 41 | 60 | 85 | 92 |
| JFM | 11 | 44 | 63 | 87 | 93 |
| FM | 12 | 45 | 65 | 88 | 93 |
| REE | 13 | 47 | 67 | 88 | 94 |
| JREFE | 14 | 49 | 69 | 89 | 94 |
| JPM | 15 | 53 | 71 | 90 | 95 |

**Table IV. Cumulative Type I Error: Reject a Top Article as a Non-Top Article When the Article Meets the Category Criterion (RTA) But Is Not Published in the Top N Journals**

The journals [Journal of Finance (JF), Journal of Financial Economics (JFE), Review of Financial Studies (RFS), Journal of Financial and Quantitative Analysis (JFQA), Journal of Money Credit and Banking (JMCB), Journal of Business (JB), Journal of Financial Intermediation (JFI), Journal of International Money and Finance (JIMF), Journal of Banking and Finance (JBF), Journal of Risk and Insurance (JRI), Journal of Futures Markets (JFM), Financial Management (FM), Real Estate Economics (REE), Journal of Real Estate Finance and Economics (JREFE), and Journal of Portfolio Management (JPM)] are cumulated from the top down. The four criteria for a Type I error are the percentage of articles not in the cumulative set of journals having cites above: 1) the median of 4 cites, 2) the mean of 9.55 cites, 3) the 90th percentile of 24 cites, and 4) the 95th percentile of 35 cites, which are based on the statistics for 626 articles in the 15 journals. For example, for the Top 3 journals (JF, JFE, RFS), 44% of the articles that have cites above the mean of 9.55 cites are not in those three journals.

| Journal | Top N Journals | Cumulative % > Median (4 Cites) | Cumulative % > Mean (9.55 Cites) | Cumulative % > 90th %ile (24 Cites) | Cumulative % > 95th %ile (35 Cites) |
|---|---|---|---|---|---|
| JF | 1 | 80 | 74 | 63 | 53 |
| JFE | 2 | 65 | 55 | 35 | 29 |
| RFS | 3 | 56 | 44 | 26 | 18 |
| JFQA | 4 | 49 | 37 | 17 | 12 |
| JMCB | 5 | 39 | 26 | 8 | 3 |
| JB | 6 | 35 | 22 | 6 | 3 |
| JFI | 7 | 31 | 20 | 6 | 3 |
| JIMF | 8 | 25 | 16 | 3 | 0 |
| JBF | 9 | 15 | 8 | 2 | 0 |
| JRI | 10 | 12 | 4 | 2 | 0 |
| JFM | 11 | 8 | 3 | 0 | 0 |
| FM | 12 | 4 | 1 | 0 | 0 |
| REE | 13 | 2 | 1 | 0 | 0 |
| JREFE | 14 | 1 | 1 | 0 | 0 |
| JPM | 15 | 0 | 0 | 0 | 0 |

**Table V. Combined Reject Top Article (RTA) Error and Accept Non-Top Article (ANTA) Error: Cumulative Basis by Top N Journals**

The journals [Journal of Finance (JF), Journal of Financial Economics (JFE), Review of Financial Studies (RFS), Journal of Financial and Quantitative Analysis (JFQA), Journal of Money Credit and Banking (JMCB), Journal of Business (JB), Journal of Financial Intermediation (JFI), Journal of International Money and Finance (JIMF), Journal of Banking and Finance (JBF), Journal of Risk and Insurance (JRI), Journal of Futures Markets (JFM), Financial Management (FM), Real Estate Economics (REE), Journal of Real Estate Finance and Economics (JREFE), and Journal of Portfolio Management (JPM)] are cumulated from the top down. The four criteria for an RTA (Type I) error are the percentage of total articles not in the cumulative set of journals but having cites above: 1) the median of 4 cites, 2) the mean of 9.55 cites, 3) the 90th percentile of 24 cites, and 4) the 95th percentile of 35 cites, which are based on the statistics for 626 articles in the 15 journals. The four criteria for an ANTA (Type II) error are the percentage of articles in the cumulative set of journals not having cites above: 1) the median of 4 cites, 2) the mean of 9.55 cites, 3) the 90th percentile of 24 cites, and 4) the 95th percentile of 35 cites. For example, for the Top 3 journals (JF, JFE, RFS), there is a combined error rate of 77% (44% RTA error plus 33% ANTA error) for the criterion of > 9.55 cites.

| Journal | Top N Journals | Cumulative Error % Median (> 4 Cites) | Cumulative Error % Mean (> 9.55 Cites) | Cumulative Error % 90th %ile (> 24 Cites) | Cumulative Error % 95th %ile (> 35 Cites) |
|---|---|---|---|---|---|
| JF | 1 | 94 | 104 | 128 | 130 |
| JFE | 2 | 76 | 83 | 99 | 108 |
| RFS | 3 | 71 | 77 | 95 | 100 |
| JFQA | 4 | 66 | 73 | 87 | 96 |
| JMCB | 5 | 63 | 69 | 83 | 89 |
| JB | 6 | 59 | 66 | 82 | 90 |
| JFI | 7 | 56 | 66 | 83 | 91 |
| JIMF | 8 | 56 | 68 | 83 | 89 |
| JBF | 9 | 54 | 67 | 86 | 92 |
| JRI | 10 | 53 | 64 | 87 | 92 |
| JFM | 11 | 52 | 66 | 87 | 93 |
| FM | 12 | 49 | 66 | 88 | 93 |
| REE | 13 | 49 | 68 | 88 | 94 |
| JREFE | 14 | 50 | 70 | 89 | 94 |
| JPM | 15 | 53 | 71 | 90 | 95 |

**Table VI. Journal Citation Reports (JCR) Impact Factors, Actual, and Estimated Cites**

The JCR impact factor (IF) for a journal is the number of cites in year t for articles published in years t-1 and t-2 divided by the number of articles published in years t-1 and t-2. The journals are Journal of Finance (JF), Journal of Financial Economics (JFE), Review of Financial Studies (RFS), Journal of Financial and Quantitative Analysis (JFQA), Journal of Money Credit and Banking (JMCB), Journal of Business (JB), Journal of Financial Intermediation (JFI), Journal of International Money and Finance (JIMF), Journal of Banking and Finance (JBF), Journal of Risk and Insurance (JRI), Journal of Futures Markets (JFM), Financial Management (FM), Real Estate Economics (REE), Journal of Real Estate Finance and Economics (JREFE), and Journal of Portfolio Management (JPM). Estimated cites are based on a regression model: Estimated Cites = -0.268 + 9.772 (Average IF). Model adjusted R² = 0.859, which is significant at the 0.001 level.

| Journal | 2002 IF | 2001 IF | 2000 IF | 1999 IF | 1998 IF | 1997 IF | 1996 IF | Average IF, 1996-2002 | Rank (IF, Cites) | Actual Mean Cites | Estimated Mean Cites |
|---|---|---|---|---|---|---|---|---|---|---|---|
| JF | 3.494 | 2.958 | 2.753 | 2.646 | 2.137 | 2.173 | 2.123 | 2.612 | 1, 1 | 24.3 | 25.26 |
| JFE | 3.248 | 2.577 | 1.904 | 1.705 | 1.767 | 2.506 | 2.609 | 2.331 | 2, 2 | 23.1 | 22.51 |
| RFS | 1.851 | 1.671 | 1.343 | 1.452 | 1.014 | 1.329 | 1.129 | 1.398 | 3, 3 | 16.0 | 13.40 |
| JFQA | 1.259 | 0.904 | 0.596 | 0.540 | 0.727 | 0.694 | 0.591 | 0.759 | 8, 4 | 13.2 | 7.15 |
| JMCB | 0.682 | 0.768 | 0.915 | 1.057 | 1.115 | 0.843 | 0.586 | 0.852 | 6, 5 | 10.4 | 8.06 |
| JB | 1.727 | 1.357 | 1.162 | 0.889 | 1.164 | 1.410 | 0.775 | 1.212 | 4, 6 | 9.8 | 11.58 |
| JFI | 0.929 | 1.536 | 0.519 | 0.444 | 0.852 | 0.774 | 0.557 | 0.802 | 7, 7 | 8.2 | 7.56 |
| JIMF | 0.565 | 0.689 | 0.394 | 0.560 | 0.835 | 0.573 | 0.494 | 0.587 | 9, 8 | 5.9 | 5.47 |
| JBF | 0.688 | 0.766 | 0.533 | 0.664 | 0.465 | 0.351 | 0.469 | 0.562 | 10, 9 | 4.6 | 5.23 |
| JRI | 0.370 | 0.196 | 0.554 | 0.268 | 0.421 | 0.517 | 0.483 | 0.401 | 13, 10 | 4.4 | 3.65 |
| JFM | 0.250 | 0.364 | 0.337 | 0.312 | 0.380 | 0.281 | 0.389 | 0.330 | 14, 11 | 3.7 | 2.96 |
| FM | 1.205 | 0.741 | 0.238 | 1.500 | 0.883 | 1.119 | 1.145 | 0.976 | 5, 12 | 3.5 | 9.27 |
| REE | 0.288 | 0.679 | 0.764 | 0.386 | 0.281 | 0.469 | 0.557 | 0.489 | 12, 13 | 3.0 | 4.51 |
| JREFE | 0.437 | 0.629 | 0.676 | 0.485 | 0.588 | 0.456 | 0.384 | 0.522 | 11, 14 | 2.6 | 4.83 |
| JPM | 0.333 | 0.215 | 0.253 | 0.411 | 0.213 | 0.305 | 0.295 | 0.289 | 15, 15 | 1.3 | 2.56 |

The author thanks Pat Fishe, Melissa Frye, and Drew Winters for very useful comments. The article benefited greatly from comments by Lemma Senbet and Alex Triantis (the Editors) and two anonymous referees.

(1) Fishe (1998) also provides similar information on 51 full professors in finance programs ranked 1-20. The average number of top three articles was 6.45 and the mean cites per year was 37.5 cites.

References

Alexander, J.C. and R.H. Mabry, 1994, "Relative Significance of Journals, Authors, and Articles Cited in Financial Research," Journal of Finance 49, 697-712.

Arnold, T., A.W. Butler, T.F. Crack, and A. Altintig, 2003, "Impact: What Influences Finance Research?" Journal of Business 76, 343-362.

Borde, S.F., J.M. Cheney, and J. Madura, 1999, "A Note on Perceptions of Finance Journal Quality," Review of Quantitative Finance and Accounting 12, 89-96.

Borokhovich, K.A., R.J. Bricker, K.R. Brunarski, and B.J. Simkins, 1995, "Finance Research Productivity and Influence," Journal of Finance 50, 1691-1717.

Borokhovich, K.A., R.J. Bricker, and B.J. Simkins, 2000, "An Analysis of Finance Journal Impact Factors," Journal of Finance 55, 1457-1469.

Chan, K.C., 2001, "A Citation-based Ranking of Journals in Financial Research: Some New Results," Journal of Financial Education 27, 36-52.

Chan, K.C., C.R. Chen, and T.L. Steiner, 2002, "Production in the Finance Literature, Institutional Reputation, and Labor Mobility in Academia: A Global Perspective," Financial Management, 31, 131-156.

Chan, K.C. and R.C.W. Fok, 2003, "Membership on Editorial Boards and Finance Department Rankings," Journal of Financial Research 26, 405-420.

Chan, K.C., R.C.W. Fok, and M. Pan, 2000, "Citation-Based Finance Journal Rankings: An Update," Financial Practice and Education 10, 132-141.

Christoffersen, S., F. Englander, A.C. Arize, and J. Malindretos, 2001, "Sub-Field Specific Rankings of Finance Journals," Journal of Financial Education 27, 37-49.

Chung, K.H., R.A.K. Cox, and J.B. Mitchell, 2001, "Citation Patterns in the Finance Literature," Financial Management 30, 99-118.

Coe, R.K. and I. Weinstock, 1983, "Evaluating the Finance Journals: The Department Chairperson's Perspective," Journal of Financial Research 6, 345-349.

Fishe, R.P.H., 1998, "What Are the Research Standards for Full Professor of Finance?" Journal of Finance 53, 1053-1079.

Krishnan, C.N.V. and R. Bricker, 2004, "Top Finance Journals: Do They Add Value?" Journal of Economics and Finance (Forthcoming).

Niemi, A.W. Jr., 1987, "Institutional Contributions to the Leading Finance Journals, 1975-1986: A Note," Journal of Finance 42, 1389-1397.

Oltheten, E., V. Theoharakis, and N.G. Travlos, 2003, "Faculty Perceptions and Readership Patterns of Finance Journals: A Global View," Journal of Financial and Quantitative Analysis (Forthcoming).

Stanley D. Smith is the SunTrust Chair of Banking and a Professor of Finance at the University of Central Florida in Orlando, FL.

Publication: Financial Management, December 22, 2004.