Leading Contributors to Insurance Research

Abstract

This study provides a comprehensive analysis of the research literature in the area of risk and insurance and assesses the total productivity of individual authors, their employers, and the institutions granting authors' terminal degrees. The study sample encompasses all original articles and notes published in 22 leading research journals from 1976 through 1986. The productivity measures impound both raw quantitative counts and peer perceptions of journal quality. The results reveal evidence of dynamic change and broadening participation among contributors to the risk and insurance research literature.

Introduction

During the past decade, a body of research has emerged in the behavioral sciences that empirically analyzes the research literature itself. The various business and economics disciplines are no exception to this practice of measuring the quality of research journals and the productivity of individual authors and institutions. Such measurement, specific to the area of risk and insurance, is the focus of this study.

One of the more obvious reasons for analyzing research productivity is the acquisition of information by academic administrators and faculty colleagues involved in decisions regarding pay, promotion, and resource allocation. Additional reasons include the provision of (1) important information about research-oriented institutions that may aid decision-making by prospective and current graduate students, faculty considering relocation, and accreditation administrators, (2) evidence of dynamic changes in research performance over time, and (3) intrinsic historical value in the documentation of leading authors and institutions.

This study encompasses a comprehensive analysis of the research literature in risk and insurance and an assessment of the total productivity of institutions and individuals contributing to the discipline. The ultimate goal is to close an apparent information gap and, in doing so, to fulfill the objectives implied above. Relevant research literature is discussed in the next section. Subsequent sections describe the research design and results of the study. The final section provides a brief summary and conclusions.

Previous Literature

Most investigations of the business and economics research literature focus upon the total productivity of the institutions employing authors who publish research articles (Laband, 1985; Moore and Taylor, 1980; Niemi, 1987; and Williams, 1987). Recent articles in the finance literature have broadened the scope of research "contributors" to include individual authors and the academic institutions granting authors their highest degrees (Ederington, 1979; Heck, Cooley, and Hubbard, 1986).

The research productivity of the various types of contributors is measured in a number of ways, but measurement normally involves counting (1) articles published, (2) pages published, or (3) citations of published articles. These items are counted for a selected journal or group of journals perceived to be leading publications in the discipline of interest. Examples of total productivity measurement in the finance and economics literature are briefly discussed below.

Heck, Cooley, and Hubbard (1986) count articles and notes published in a single journal, The Journal of Finance, from 1946 through 1985. The counting procedure is adjusted for co-authorship by giving each of the n authors credit for a 1/n share of the article published. The co-author adjusted article count and attendant rankings are reported for all three categories of contributors.
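To make the co-author adjustment concrete, a minimal sketch follows; the data layout and names are hypothetical, as the papers cited specify only the 1/n rule itself.

```python
from collections import defaultdict

def allocate_credit(articles):
    """Give each of an article's n authors a 1/n share of one article.

    `articles` is a list of author lists, one entry per article
    (a hypothetical structure; only the 1/n rule comes from the text).
    """
    credit = defaultdict(float)
    for authors in articles:
        share = 1.0 / len(authors)
        for author in authors:
            credit[author] += share
    return dict(credit)

# Example: one solo article and one two-author article.
print(allocate_credit([["Smith"], ["Smith", "Jones"]]))
# {'Smith': 1.5, 'Jones': 0.5}
```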

Moore and Taylor (MT) (1980) simply count both articles and pages published between 1972 and 1978 for a sample of 15 journals representing five different business disciplines. Authors' employers are ranked by the number of articles published, although the number of pages published also is stated, and separate results are reported for three finance journals. Williams (1987) updates the MT study and, in addition, provides concentration ratios for the leading employers and analyzes intertemporal changes among employing institutions.

Niemi (1987) ranks employers based upon pages published from 1975 through 1986 in the same three finance journals analyzed by Williams. An adjustment for co-authorship is employed. Niemi finds evidence of significant change in the gross productivity of the leading research institutions over time and very different rankings of contributors across the three journals.

Laband (1985) advocates the counting of article citations as the best measure of the "revealed preference" of research-oriented peers. Laband counts both co-author adjusted citations and pages written for each article published from 1971 through 1983 in 27 leading economics journals. To evaluate total research productivity for a list of "top 50" economics departments, Laband reports relative rankings based upon (1) simple counts of citations and pages, (2) counts per faculty member, (3) counts per Ph.D. graduate, and (4) citation counts per page published. Comparing rankings based upon total citation and total page counts, he observes a relatively high correlation coefficient of .822.

Ederington (1979) uses both citation and page counts to measure productivity based upon articles published in two leading finance journals. He finds that both citation and page counts reveal similar evidence about institutional concentration in financial research, although relative rankings between individual institutions may vary. Multiple regression analysis confirms a strong positive relationship between the number of citations and article page length. Another result indicates no synergistic effects for joint authorship, a finding that supports the co-author adjustments implemented by many researchers.

Chandy and Thornton (CT) (1985) supply the only previous analysis of risk and insurance research contributions. CT count the number of articles published in The Journal of Risk and Insurance from 1973 through 1984 and The Journal of Insurance Issues and Practices from 1977 through 1984, without adjustment for co-authorship. Rankings of authors' employers are reported for each journal.

In this study, the contributions of individual authors, their employers, and their alma maters are examined using a variety of methods applied in the finance and economics literature. Total productivity measures, which incorporate qualitative information unique to the risk and insurance discipline, are developed and tested for a broad sample of leading insurance and related business and economics journals. The research data and methodology are detailed in the following section.

Research Design

Sample Data

The selection of leading research journals relevant to risk and insurance researchers is based upon the qualitative perceptions of insurance and finance professors as documented by Outreville and Malouin (OM) (1985). OM surveyed academic members of the American Risk and Insurance Association (ARIA), asking respondents to assign scores on a scale from zero to 20 that reflect their perceptions of quality for a large group of research journals. The mean quality ratings then were weighted by the frequency of citation for each journal to derive an impact score. [1] OM report mean impact scores for four categories of respondents segregated by whether they are employed in a Ph.D.-granting department and whether they teach a combination of finance and insurance or insurance only.

The study by OM provides impact scores for 23 journals rated by both groups of respondents teaching in Ph.D.-granting programs. The current study uses only the impact scores from these respondents. This policy reflects a presumption that, as a class, the faculty teaching in Ph.D.-granting programs are most familiar with the research-oriented journals from both a research and a teaching perspective.

Forty-six individuals teaching in Ph.D.-granting schools responded to the OM survey. According to ARIA, 19 institutions offer a doctoral degree in risk and insurance (American Risk and Insurance Association, 1986). Although the sample of 46 respondents may provide an adequate cross-section of opinion, OM do not supply information regarding respondents' specific employers or the proportion teaching in institutions offering Ph.D. degrees in risk and insurance. Hence, the impact scores conceivably could be biased toward the responses of faculty in larger programs and/or faculty teaching at institutions that grant Ph.D. degrees, but not specifically in risk and insurance. For this study, a weighted average impact score is calculated for each journal with the weights set proportionally to the number of respondents in each of the two groups teaching in Ph.D. programs.
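A minimal sketch of this weighting step appears below; the split of the 46 respondents between the two groups and the group impact scores for the example journal are invented for illustration.

```python
def weighted_impact(score_a, score_b, n_a, n_b):
    """Average two groups' impact scores, weighted by respondent counts."""
    return (n_a * score_a + n_b * score_b) / (n_a + n_b)

# Hypothetical figures: suppose 30 of the 46 respondents teach finance and
# insurance (group score 15.0) and 16 teach insurance only (group score 12.0).
print(weighted_impact(15.0, 12.0, 30, 16))  # 13.9565...
```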

Data were collected from the 1976 through 1986 volumes of the 22 research journals shown in Table 1. The weighted average impact score for each journal also is shown. One of the 23 journals originally considered, Transactions of the Society of Actuaries (TSA), would have ranked ninth in terms of weighted impact score but was eliminated from the study sample because the TSA does not identify authors' employers, making collection costs prohibitive. The exclusion of the TSA may introduce some bias against authors primarily engaged in actuarial research. Had the TSA volumes been included in the sample, however, an unfair bias in favor of actuarial researchers might have resulted, because only members of the Society of Actuaries are allowed to publish articles in the TSA. The 22 journals in the study sample have no such exclusive policies, although submission fees may vary depending upon whether submitting authors are members of an association or society sponsoring the journal. [2]

All articles and notes are included from the sampled journals devoted exclusively to risk and insurance issues. The finance and economics journals listed in Table 1 require more careful screening: articles and notes focusing on pure risk issues are included, but studies of speculative, or investment, risk are excluded. In addition, a significant number of articles appearing in the finance and economics journals relate to the insuring of investment or savings contracts, including federal insurance for financial institutions, municipal bond insurance, and mutual fund insurance. All articles of this type are included in the sample set. For all the journals surveyed, published comments, authors' replies, abstracts, book reviews, and the like are excluded from the data base.

Gathering information on the institutions granting authors' terminal degrees poses a relatively difficult problem to researchers focusing on the area of risk and insurance. Heck, Cooley, and Hubbard (HCH) (1986) use the Comprehensive Dissertation Index (CDI) to document the schools granting degrees to 85.6 percent of the authors publishing articles in the Journal of Finance. In this research, a lower identification rate is to be expected because a broader range of journals is surveyed. The degree sources for non-U.S. authors and authors with terminal law degrees are particularly difficult to identify. Based upon information from the CDI and the biographical information contained in the journals, 65.2 percent of the authors' alma maters are identified.

Methodology

The raw measures used for estimating the productivity of research contributors are the number of articles and the number of pages published in the 22 sample journals. These raw measures then are adjusted for co-authorship in the strictly proportional manner used by HCH and others. Both of the raw measures are solely quantitative and may not adequately reflect the quality of research contributed to the risk and insurance literature. In previous research, the approach most frequently suggested to introduce qualitative peer perception is the use of citation data. Using such data is not possible in this study for the following reasons.

The Social Sciences Citation Index (SSCI) is the source used by researchers in previous studies of the finance and economics literature to determine the frequency of article citation in major research journals. A wide spectrum of problems, including inherent biases against second and third authors, is likely to emerge when using SSCI data. Other problems are summarized by Ederington (1979, pp. 778-79) and Laband (1985, pp. 218-19).

The insurmountable problem in using citations for the purpose of this research is the paucity of risk and insurance journals indexed in the SSCI. Of the 22 journals in the sample, only 15 are indexed in the SSCI. The seven unindexed journals comprise six "relatively pure" insurance and actuarial journals and the Financial Analysts Journal. The only indexed journals that focus primarily upon risk and insurance research are The Journal of Risk and Insurance and Insurance: Mathematics and Economics.

Though citations are not used, qualitative information is incorporated in this study based upon the direct perceptions of ARIA members recorded by OM. The quantitative productivity measures of articles and pages published are multiplied by the weighted average impact scores for each research journal to generate a total productivity score. Considering the high degree of correlation between number of pages published and citations, as documented by both Laband and Ederington, the productivity score incorporating pages published and journal impact scores should primarily reflect the quality of research output as perceived by the authors' peers.
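As a sketch of how a total productivity score could be assembled under this scheme (the record layout, journal impact values, and page figures below are all hypothetical):

```python
def productivity_scores(publications, impact):
    """Accumulate co-author-adjusted pages times journal impact score.

    Each record is (contributor, journal, pages, number_of_authors);
    the layout and all figures are invented for illustration.
    """
    scores = {}
    for contributor, journal, pages, n_authors in publications:
        adjusted = pages / n_authors  # strict 1/n co-author share
        scores[contributor] = scores.get(contributor, 0.0) + adjusted * impact[journal]
    return scores

impact = {"JRI": 14.0, "CPCU": 6.0}  # invented impact scores
pubs = [("Smith", "JRI", 20, 2),     # 10 adjusted pages
        ("Smith", "CPCU", 8, 1)]     # 8 adjusted pages
print(productivity_scores(pubs))     # {'Smith': 188.0}
```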

Productivity scores are determined, as described previously, for individual authors, their employers, and their alma maters. These types of contributors then are ranked accordingly. Rank correlation analyses are used to determine the consistency of leading contributor performance across different measurement schemes. The productivity of leading employers and degree-granting institutions also is compared over two time subperiods to reveal any dynamic changes among the leading institutions.

Certain general limitations of the current research are evident and must be considered by the reader. First, all of the productivity measures are at least partially dependent upon measures of the quantity of published output. While qualitative weights are applied, a quantitative component is inextricably contained in the final productivity measures, prohibiting measurement of research contributions on the basis of quality only.

A second general limitation is the restriction of research output to articles and notes in leading journals. Other output, such as books, monographs, and working papers, simply is not considered.

Several limitations exist that are specific to the measurement of research contributions by employers and degree-granting institutions. For instance, no adjustments are made for faculty size. No conclusions can be drawn, therefore, related to average productivity per researcher at any single institution. As noted by Niemi (1987, p. 1390), per capita measurement is sensitive to the productivity of individual members in small departments and may not reflect an institution's overall commitment to a discipline.

The employer of record at the time a researcher publishes an article is given credit for that publication. No retroactive adjustments are made if a researcher leaves the employer of record after publication of an article. The researcher mobility factor may be important to some readers, such as prospective Ph.D. students, who might wish to use this study to project the future research efforts and expertise of academic institutions.

Finally, research of this type ignores the overall mission of individual academic institutions. Only the objective of research productivity is considered here, even though academic institutions and authors have other goals, particularly in the areas of teaching and service. For instance, the overall mission in terms of research, teaching, and service is likely to be quite different at a state-supported university compared to a privately funded institution benefitting from substantial endowment support.

Despite various limitations, the research design provides risk and insurance researchers with the first comprehensive measurement of the total research productivity of leading contributors to the discipline. The measures used are quantitative, but incorporate qualitative peer perceptions of the journals included in the study sample. The results are reported and discussed in the next section.

Results and Implications

Results for Leading Employers

The employers of contributing authors are ranked by scores that incorporate either the number of articles or the number of pages published, the adjustment for co-authorship, and weights for the journal impact scores from Table 1. These weighted scores for 1976 through 1986 and two subperiods, 1976 through 1981 and 1982 through 1986, are shown in Tables 2 and 3.

The six leading employers (Pennsylvania, Georgia, Georgia State, Texas, Wisconsin, and Illinois) are the same under both measurement schemes, although the rankings vary slightly. When rankings based upon the weighted article scores are compared with those based upon the weighted page scores for the top 30 employers, a high degree of correlation is evident: the Spearman correlation coefficient is .800, which is significant at the .001 level.
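Readers wishing to replicate this type of consistency check can compute the Spearman coefficient directly. The sketch below uses hypothetical rank lists and the standard spearmanr routine from scipy; the study's own data are not reproduced here.

```python
from scipy.stats import spearmanr

# Hypothetical ranks of ten employers under the two measurement schemes.
ranks_by_articles = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ranks_by_pages = [2, 1, 3, 5, 4, 6, 8, 7, 10, 9]

rho, p_value = spearmanr(ranks_by_articles, ranks_by_pages)
print(rho, p_value)  # rho near 1 with a small p-value indicates agreement
```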

Previous authors have demonstrated that page counts may impound a greater qualitative impact than do article counts. While correlation analysis indicates a high degree of relationship between the two measurement schemes, the ranking of some employers is significantly affected if the weighted page score is considered preferable to the weighted article score. If ranks are based upon pages instead of articles for the entire study period, Tables 2 and 3 indicate substantial improvements in rank for Connecticut (from 17 to 10), Southern California (from 21 to 13), Pennsylvania State (from 23 to 14), British Columbia (from 28 to 17), and North Texas (from unranked to 20). Sizeable losses of rank accrue to The American College (from 10 to 21), Arizona State (from 15 to 25), Florida State (from 16 to 27), and North Carolina-Greensboro (from 22 to unranked).

The data for the two subperiods in Tables 2 and 3 furnish information on dynamic change among employers of risk and insurance researchers. Based upon weighted page scores, the dominant position of the University of Pennsylvania appears to have increased in the early 1980s. In the first subperiod, Pennsylvania authors supplied 9.2 percent of the research contributions produced by the top 30 employers while in the second subperiod, the share increased to 15 percent.

Other employers substantially increasing productivity during the 1982 through 1986 subperiod include Illinois, Ohio State, Iowa, Minnesota, North Texas, Toronto, Venezian and Associates, Florida State, Brussels, and Chicago. The reader should realize, however, that scores for such brief subperiods are likely to be very volatile because of the lengthy lags between submission and final publication of articles in the leading research journals. The Spearman correlation coefficient comparing the weighted page scores for the first and second subperiods is .68 and indicates a significantly positive relationship at the .002 level. Squaring the correlation coefficient produces the coefficient of determination, which indicates that only 46 percent of the variation in the second subperiod rankings is explained by the earlier rankings and that dynamic change took place among contributing employers during the study period.

Following Williams (1987) and others, the percentage of gross production attributable to groups of leading employers is used to measure the extent to which research productivity is dominated by a few leading institutions. These percentages sometimes are referenced as concentration ratios. The concentration ratios estimated in this study are based upon weighted page scores for the subperiods 1976 through 1981 and 1982 through 1986 and are displayed in Table 4.
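A minimal sketch of the concentration-ratio computation follows; the weighted page scores are invented for illustration.

```python
def concentration_ratio(scores, top_n):
    """Share of total weighted pages credited to the top_n contributors."""
    ordered = sorted(scores, reverse=True)
    return sum(ordered[:top_n]) / sum(scores)

scores = [90, 70, 55, 40, 30, 20, 15, 10, 5, 5]  # invented page scores
print(concentration_ratio(scores, 5))            # ~0.838
```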

Despite the significant increase in productivity attributable to the top-ranked employer, the top five employers account for only 13.3 percent of weighted pages from 1982 through 1986 versus 17.8 percent in the previous subperiod. As the top group of employers is expanded to include more institutions, the concentration ratios decline uniformly for the more recent subperiod. This finding provides preliminary evidence that research efforts in the area of risk and insurance are becoming more widely dispersed among contributing employers.

While employer rankings are sensitive to both the specific quantitative proxies and the periods analyzed, the unique qualitative weights implemented in this study also have a substantial impact. For instance, Spearman correlation analysis indicates that only 53.2 percent of the variation in the qualitatively weighted page scores used in this study is explained by unweighted page counts. Weighting by journal impact scores adds information not obtained by simpler counting procedures.

Tables 5 and 6 provide rankings based upon simple article and page counts. Comparisons of these results with those shown in Tables 2 and 3 demonstrate the relative impact of using qualitative weighting. Comparing weighted versus unweighted article counts, the Spearman correlation coefficient is .907 for the first subperiod, but only .691 for the second subperiod. Although the top five employers remain the same if the qualitative weights are dropped, substantial changes in rank occur for other employers.

The most notable change attributable to the use of simple article counts is the improvement in rank for European academic institutions. Four European employers that were previously unranked appear in the top 30, with Katholieke University-Leuven rising to ninth. The University of Brussels also improves in rank, from 27 to 17. Especially high productivity is apparent in the second subperiod, which may reflect the introduction and/or expansion of several European journals, such as Insurance: Mathematics and Economics, the ASTIN Bulletin, and The Geneva Papers. As explained earlier, the qualitative weights used in this study probably reflect bias against the journals favored by actuarially oriented European authors.

Table 6 shows employer rankings based upon simple page counts. Three European universities, previously unranked based on qualitatively weighted page scores, now enter the top 30. Other schools notably benefitting from the dropping of qualitative weights include Florida State (improving from 27 to 15) and the American College (from 21 to 12). Employers losing rank when qualitative weights are ignored include South Carolina (from 15 to 27), Pennsylvania State (from 14 to 24), Temple (from 12 to 22), Minnesota (from 19 to 28), and Toronto (from 22 to unranked). The Spearman coefficients comparing weighted versus unweighted page counts are .837 for the first subperiod and .703 for the second subperiod. The use of impact scores to weight quantitative measures of total productivity has an important effect on the ranking of many leading employers.

Further analysis of the impact of certain journals on total productivity scores reveals more information on why qualitative weighting has such an effect. The impact of the leading U.S. academic journal devoted to the area of risk and insurance, The Journal of Risk and Insurance (JRI), is compared to the impact of two widely read journals that accept both applied academic research and descriptive surveys by professionals, the CPCU Journal (CPCU) and the Journal of the American Society of CLU and CHFC (CLU). When unweighted counts are used, the CPCU and CLU journals account for 30.5 percent of all articles and 20.4 percent of all pages published, while the JRI supplies 21.9 percent of articles and 27.9 percent of pages published.

When qualitative impact scores are applied, the CPCU and CLU journals furnish only 27.1 percent of weighted articles and 17.4 percent of weighted pages. In contrast, the JRI share of weighted articles and pages jumps to 34.8 percent and 42.5 percent, respectively. The qualitative weights applied in this study effectively influence total productivity scores in favor of journals targeted to academic researchers, as opposed to professionally oriented journals.

Results for Institutions Granting Terminal Degrees

Table 7 contains weighted page scores for the institutions granting terminal degrees to publishing authors. The University of Pennsylvania occupies a dominant position with graduates producing nearly 24 percent of the contributions attributable to the top 30 degree grantors during the entire study period. The total productivity of Pennsylvania graduates did decrease sharply in the second subperiod, although the hazards of using such a brief subperiod still apply, as mentioned previously.

Spearman correlation analysis shows significant, but small, correlation in the rankings of degree-granting institutions over the two subperiods. The rankings in the first subperiod explain only 15 percent of the variation in rankings for the second subperiod. Among the universities with substantially higher scores in the second subperiod are Illinois, Massachusetts Institute of Technology, Stanford, Chicago, Yale, California Institute of Technology, Oregon, New York University, North Carolina, Texas, and South Carolina. Only three of these schools have formal graduate programs in risk and insurance. The increasing presence of authors graduating from universities with nationally ranked finance and economics programs supplies further evidence of a broadening of participation in risk and insurance research.

Unweighted page and article scores also were computed and analyzed for the alma maters, but are not reported here. The resulting rankings are very similar to those based upon qualitatively weighted scores, as evidenced by Spearman correlation coefficients of .931 for the first subperiod and .944 for the second subperiod, and contain little additional information.

Concentration ratios estimated for the degree-granting institutions confirm a progressively greater dispersion of contributions by graduates. The top five alma maters account for 20.4 percent of weighted pages from 1976 through 1981, but only 10.6 percent from 1982 through 1986. The top 20 are credited with a 62.8 percent share for the former period and a 50.9 percent share for the latter. The trend toward a greater dispersion of research contributions also may be a function of the broadened participation noted previously.

Results for Leading Authors

The leading authors contributing to risk and insurance research are ranked by both weighted page scores and weighted article scores in Table 8. Although seven of the top ten authors based upon weighted page scores also are in the top ten based on weighted article scores, the specific ranking of individuals is quite sensitive to the measurement method used. Spearman correlation analysis indicates that rankings based upon weighted page scores explain only 57 percent of the variation in rankings based upon weighted article scores.

The weighted page scores are considered the preferred measurement method, so these scores and rankings are listed first in Table 8. If weighted article scores were used instead, several leading authors would drop substantially in terms of rank. These authors include Scott Harrington (from 1 to 10), William Scheel (from 9 to 21), Dan Anderson (from 12 to 22), and Leonard Freifelder (from 21 to unranked). Notable beneficiaries of a ranking based on article count would include Karl Borch (from 25 to 11), Iskandar Hamwi (from unranked to 14), and F. De Vylder (from 29 to 20). The data and correlation tests indicate that the ranking of individual authors is more sensitive to the quantitative measurement proxy than is the ranking of employers or degree-granting institutions.

The ranking of individual authors also is quite sensitive to the qualitative weights applied in this study. Table 9 furnishes data pertaining to the ranking of individual authors based on unweighted page and article counts.

When the qualitative weights are removed from page scores, three authors gain substantially in rank. These authors are F. De Vylder (from 29 to 4), Jean LeMaire (from 22 to 7), and Harvey Rubin (from 15 to 10). The first two authors have published prolifically in the European, actuarially-oriented journals. Dropping from the top ten are Michael Smith, William Scheel, and David Babbel. The use of unweighted article counts results in even greater turnover among the top ten individual contributors.

Results for Leading Contributors to The Journal of Risk and Insurance

Table 10 shows the top ten contributors to The Journal of Risk and Insurance (JRI) in all three categories based upon total pages published. The JRI rankings correspond closely to the overall rankings based on weighted page scores for all journals. Eight of the top ten contributing employers, degree-granting institutions, and individual authors in the overall rankings also are among the top ten contributors to the JRI.

The top six employers are the same as for the overall rankings, although their ranks within this group are different. South Carolina and Temple enter the top ten when only JRI pages are counted, while Harvard and Ohio State lose top ten status. The top ten employers account for slightly over 40 percent of the pages published in the JRI, indicating higher concentration levels compared to those for the entire sample of journals.

Among degree-granting institutions, the top three rankings based upon JRI pages published are identical to the top three in the overall rankings. The specific ranks of the remaining seven schools vary considerably in comparison with the overall rankings. New additions to the top ten alma maters include California Institute of Technology and Ohio State, while Georgia State and Iowa drop from the top ten.

The top four individual authors contributing to the JRI remain the same as for the overall rankings, although specific ranks again vary. Leonard Freifelder and Barry Smith enter the top ten based upon JRI pages published, while Robert Hershbarger and Yehuda Kahane drop out of this group. The JRI rankings for individual authors are particularly sensitive to the period observed. For instance, one additional ten-page article could improve an individual author's ranking by as many as three positions.

Conclusions

The comprehensive analysis of risk and insurance research literature conducted in this study supplies extensive, new information about the contributions of leading authors and institutions. The top group of employers contributing to the literature is relatively stable over different measurement schemes, but rankings vary over time. A broadening of participation among both employing and degree-granting institutions is apparent in more recent years. This trend may be partially attributable to increasing research in the area of risk and insurance by authors trained or employed by institutions not offering a risk and insurance curriculum. The overall rankings based upon all 22 journals surveyed and the rankings based upon contributions to The Journal of Risk and Insurance correspond closely for the top authors and institutions.

In addition to reporting historical data, this study provides certain insights into the measurement of research contributions. The use of different raw measurement units, specifically page counts versus article counts, has a substantial effect upon some rankings of contributors. The ranking of individual authors is especially affected by the quantitative measure chosen.

Journal impact scores based upon peer researchers' perceptions of journals were used as weights applied to the raw quantitative measures of page and article counts. For the study sample, qualitative weighting reveals information not impounded by simple counting procedures. The qualitative weighting of raw measures effectively favors publications traditionally oriented toward an academic, rather than a professional, audience. Employer rankings are somewhat sensitive and individual author rankings are quite sensitive to the qualitative weights applied in this study.

Analysis of total productivity over two periods discloses evidence of dynamic change among the leading contributors to the risk and insurance literature. The rankings of both contributing authors and institutions granting degrees to these authors are particularly sensitive to the period selected for analysis.

The current study indicates the need for additional research related to the measurement of research productivity. Future studies may expand the definition of research production to include books, monographs, and working papers. Analysis of per capita productivity at the leading institutions also would add to the current body of information. A comparison of peer perceptions as measured via citation counts versus opinion survey data would be useful for developing better qualitative measures of research productivity.

The issue of dynamic change and broadened participation in risk and insurance research deserves more analysis. Do these observed trends indicate increased interest among researchers outside the confines of the traditional risk and insurance discipline? Or does the greater dispersion of research productivity reflect a de-emphasis of research and education in risk and insurance at some of the leading institutions? Does the lower concentration of total productivity by the leading institutions reflect decreased productivity on a per capita basis, increasing rigor of the leading journals devoted to risk and insurance issues, or accelerated productivity of previously unranked institutions? The current study provides many answers to questions regarding risk and insurance research, but the analysis and results raise even more questions for interested parties to investigate.

[1] The impact score simply allows equal weighting of both a direct and an indirect measure of journal quality. As explained by OM, previous research indicates that journals cited more frequently also are perceived to be of higher quality.

[2] Respondents' quality perceptions and, therefore, impact scores are specific to the OM survey period. Journals beginning publication in the 1970s and 1980s were subject to relatively low levels of awareness, which may be reflected in a lower impact score than would be applicable if the survey were replicated today. Some examples of refereed journals possibly affected by the time specificity problem include Benefits Quarterly, Insurance: Mathematics and Economics, Journal of Insurance Regulation, and Journal of Financial Services Research.

(Tables and other figures omitted)

References

1. American Risk and Insurance Association, 1986, Graduate Study in Risk and Insurance, (Orlando, Florida: University of Central Florida).

2. Chandy, P. R. and John H. Thornton, 1985, An Analysis of Institutional Contributions to Major Insurance and Risk Management Journals and Annual Meetings with a Summary of Academic and Practitioner Preferences Among Insurance and Risk Management Publications, Paper presented at the Southern Risk and Insurance Annual Meetings.

3. Comprehensive Dissertation Index, 1861-1986, (Ann Arbor, Michigan: Xerox University Microfilms International).

4. Ederington, Louis H., 1979, Aspects of the Production of Significant Financial Research, Journal of Finance, 34: 777-86.

5. Heck, J. Louis; Philip L. Cooley; and Carl M. Hubbard, 1986, Contributing Authors and Institutions to the Journal of Finance, Journal of Finance, 41: 1129-40.

6. Laband, David N., 1985, An Evaluation of 50 'Ranked' Economics Departments - By Quantity and Quality of Faculty Publications and Graduate Student Placement and Research Success, Southern Economic Journal, 52: 216-40.

7. Moore, Lawrence J. and Bernard W. Taylor, III, 1980, A Study of Institutional Publications in Business-Related Academic Journals, 1972-1978, Quarterly Review of Economics and Business, 20: 87-97.

8. Niemi, Albert W., Jr., 1987, Institutional Contributions to the Leading Finance Journals, 1975 through 1986: A Note, The Journal of Finance, 42: 1389-97.

9. Outreville, J. Francois, and Jean-Louis Malouin, 1985, What Are the Major Journals that Members of ARIA Read?, Journal of Risk and Insurance, 52: 723-33.

10. Williams, William W., 1987, Institutional Propensities to Publish in Academic Journals of Business Administration: 1979-1984, Quarterly Review of Economics and Business, 27: 77-94.

Larry A. Cox is Assistant Professor and Sandra G. Gustavson is Professor and Department Head of Insurance, Legal Studies, and Real Estate at The University of Georgia.

The authors wish to thank Ann Rudd, Carol Jordan, Ho Khang, Brenda Powell, Steve Bird, Jim Carson, and David Cather for their assistance in this project.
