
Research impact assessment.

RESEARCH IMPACT ASSESSMENT is a recent 196-page paper that describes the practice of federal research impact evaluation. The following sections summarize the paper's contents, with references omitted.(1)


Peer Review
Peer review of research represents evaluation by experts in the field and is the method of choice in practice in the U.S. Peer review problems are described and are shown to derive from the subjectivity and conflicts inherent in a group of peers. The minimum conditions required for a high quality peer review are presented. Assuming these considerations have been taken into account, three of the most important intangible factors for a successful peer review are shown to be: motivation of the review leader; competence of each team member for the purpose selected and competence of the team as a unit to address the many facets of research impact; and independence of the team members from conflicts and biases.

Peer Review of Proposed Programs

The paper describes selected state-of-the-art peer review approaches used by research-sponsoring organizations to evaluate proposed research, with special emphasis on how research impact is incorporated into the review process. The first case study is the review of proposals by the National Science Foundation (NSF) with the use of external experts; the second is the two-tiered review of proposals by the National Institutes of Health, in which proposals are reviewed initially by a peer review group (composed mainly of active researchers at colleges and universities) for scientific and technical merit, and are then sent to a statutorily mandated advisory council for a program relevance review; the final case is the use of external experts in the Office of Naval Research's competitive process (among internal Navy organizations) to help evaluate accelerated program proposals.

Peer Review of Existing Programs

The paper focuses on selected state-of-the-art peer review approaches and on how research impact is incorporated into the peer review of ongoing research. The first case study is the U.S. Department of Energy's (DOE) classic scoring-based review of its Office of Basic Energy Sciences (BES), conducted in 1981 using forty panels of experts, and the evolution of that approach into present DOE practice. Of particular note is a regression analysis of the forty panels' scores, which showed that the evaluation criteria Team Quality, Scientific Approach, and Impact on Mission were the most influential in predicting final project scores.
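The mechanics of that kind of scoring analysis can be illustrated with a toy regression. The numbers below are synthetic stand-ins, not the 1981 BES panel data; the sketch only shows how one estimates the weight each criterion carries in a final project score.

```python
import numpy as np

# Hypothetical illustration of a DOE/BES-style scoring analysis: regress
# final project scores on per-criterion ratings to see which criteria are
# most influential. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n_projects = 200

# Per-project ratings on three criteria, each on a 1-5 scale.
team_quality = rng.uniform(1, 5, n_projects)
scientific_approach = rng.uniform(1, 5, n_projects)
impact_on_mission = rng.uniform(1, 5, n_projects)

# Simulate final scores in which the three criteria dominate (plus noise).
final_score = (0.45 * team_quality
               + 0.35 * scientific_approach
               + 0.20 * impact_on_mission
               + rng.normal(0, 0.2, n_projects))

# Ordinary least squares: X @ beta approximates final_score
# (first column of X is the intercept).
X = np.column_stack([np.ones(n_projects), team_quality,
                     scientific_approach, impact_on_mission])
beta, *_ = np.linalg.lstsq(X, final_score, rcond=None)

for name, coef in zip(["intercept", "team quality", "scientific approach",
                       "impact on mission"], beta):
    print(f"{name:20s} {coef:+.3f}")
```

The recovered coefficients rank the criteria by influence on the final score, which is the essence of the analysis the paper reports.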

The second case study focuses on the methods ONR uses to review extramural and intramural programs. The third case study relates to the two-tier annual review of the National Institute of Standards and Technology (NIST) by a Visiting Committee, which evaluates general policy and management, and by the National Academy of Sciences (NAS), which uses seventeen panels of experts on-site to perform a detailed technical evaluation of programs. The final case study is the annual review of the DOE national laboratories by the field offices, in which most input seems to come from the sponsors, and estimations of research impact appear to derive from the DOE program advisory committees and peer review assessments.

To summarize the peer review section, the methods described include criteria that address the impact of research on its own and allied fields, as well as on the mission of the sponsoring organization. The most intensive uses of peer review are the NSF/NIH processes for assessing proposals and the NAS annual review of NIST. Generally, nearer-term research impacts are more important in the review outcome than longer-term impacts, but less important than team quality, research approach, or research merit.


Semiquantitative Methods
Semiquantitative methods make minimal use of mathematical tools but draw on documented approaches and results. In practice, some of these approaches (namely, studies of accomplishments resulting from sponsored research programs) are widely used by the research-sponsoring organizations. Three semiquantitative methods are discussed: Projects Hindsight and TRACES (retrospective approaches), and accomplishments books.

Project Hindsight was a retrospective study by the Defense Department that related post-World War II R&D products ("research or exploratory development [RXD] events") to weapons systems, but did not consider those (mainly basic) research events that occurred twenty or more years before system implementation. Another limitation of the study derived from defining the RXDs as identifiable advances that impact the final system directly, effectively ignoring the potentially large cumulative indirect impacts of basic research.

Project TRACES was a similar approach by the NSF to trace retrospectively the key events that had led to major technological innovations. Because the TRACES time frame extended back far enough to include many basic research results, the study showed that nonmission research provided the origins from which science and technology could advance toward innovations, but that this research typically appeared complete before the innovation to which it would ultimately contribute was even conceived. A modern version of the TRACES study added citation analyses to provide an independent measure of the impact of the key papers (papers associated with each key "event").

Accomplishments books, showing detailed scientific accomplishments and their (usually non-quantitative) impacts, were described for ONR, the Air Force Office of Scientific Research, and the DOE. The approach taken by the Office of Energy Research, DOE, for one of its component organizations, the Office of Health and Environmental Research (OHER), was to track impacts and (sometimes quantified) benefits of selected research areas over the forty-year history of OHER. This example illuminated the difficulty of quantifying benefits and allocating costs and benefits to the multiple performers and sponsors over time.

Hindsight, TRACES, and the DOE accomplishments books used a historiographic approach, looking for significant research or development events in the evolution of research programs into products. They attempted to convince the reader that: (1) the significant R&D events in the development of the product or process were the ones identified; (2) typically, the organization sponsoring the study was responsible for some of the (critical) significant events; (3) the final product or process to which these events contributed was important; and (4) even though neither the R&D costs nor (typically) the benefits were quantified, the research and development were worth the cost.


Quantitative Methods
Problems with bibliometrics, which limit their use in federal research impact studies, are described and can be summarized as follows: (1) number counts of publications and citations can have multiple interpretations; and (2) journal publications are used differently as modes of communication in different technical disciplines. Two macroscale bibliometric studies, which characterize science activity at the national, international, and discipline level, are presented: (1) the biennial NSF Science and Engineering Indicators report, which tabulates data on characteristics of personnel in science, funds spent, publications and citations by country and field, and many other bibliometric indicators; and (2) a study at the national level aimed at evaluating the comparative international standing of British science. The lack of influence of such studies on policy or decisionmaking, i.e., the allocation of resources, is discussed.

One exception to the limited federal use of microscale bibliometric studies for evaluating programs and influencing research planning reported in the literature has been the NIH. The NIH bibliometric-based evaluations discussed in the paper include the effectiveness of various research support mechanisms and training programs, the publication performance of the different institutes, the responsiveness of the research programs to their congressional mandate, and the comparative productivity of NIH-sponsored research and similar international programs.
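The count-interpretation problem noted above is often handled by normalizing each paper's citation count against its field's average, since citation norms differ sharply across disciplines. The sketch below uses invented figures purely to show the mechanics of that normalization.

```python
from statistics import mean

# Toy sketch of field-normalized citation counts. Raw counts mislead
# across disciplines because citation norms differ; dividing by the
# field average puts papers on a common scale. All numbers are invented.
papers = [
    {"field": "molecular biology", "citations": 40},
    {"field": "molecular biology", "citations": 60},
    {"field": "mathematics",       "citations": 6},
    {"field": "mathematics",       "citations": 12},
]

# Average citation count within each field.
field_avg = {}
for field in {p["field"] for p in papers}:
    field_avg[field] = mean(p["citations"] for p in papers if p["field"] == field)

for p in papers:
    relative = p["citations"] / field_avg[p["field"]]
    print(f'{p["field"]:18s} raw={p["citations"]:3d}  normalized={relative:.2f}')
```

In this toy example the 12-citation mathematics paper outranks the 40-citation biology paper once normalized, which is exactly the kind of reversal that makes raw counts ambiguous.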

Patent citation analysis studies, which are used little now by the federal government but which have the potential to provide insight into the conversion of science to technology, are described. Their limitations are similar to those of journal publication bibliometrics, and depend on how well patents represent technology for different fields.

Co-occurrence techniques, which analyze phenomena that frequently occur together, have the potential to assess research impact by generating maps of evolving scientific fields and examining changes in field structures. These computer-based techniques tend to focus on macroscale impacts of research, but at present they see little federal use. Examples of co-citation, co-nomination, and especially co-word analysis are presented.
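The core computation behind co-word analysis is simple: count how often keyword pairs appear together on the same papers, then map the strongest links. The papers and keywords below are invented for illustration only.

```python
from collections import Counter
from itertools import combinations

# Toy co-word analysis: tally keyword pairs that appear on the same paper.
# The strongest pairs form the densest links on a co-word map of the field.
# All papers and keywords are hypothetical.
papers = [
    {"superconductivity", "thin films", "critical temperature"},
    {"superconductivity", "critical temperature", "flux pinning"},
    {"thin films", "deposition", "flux pinning"},
    {"superconductivity", "thin films", "flux pinning"},
]

co_occurrence = Counter()
for keywords in papers:
    # Sort so each unordered pair is counted under one canonical key.
    for a, b in combinations(sorted(keywords), 2):
        co_occurrence[(a, b)] += 1

for pair, count in co_occurrence.most_common(3):
    print(pair, count)
```

Tracking how these pair counts shift between time periods is what lets the technique detect changes in a field's structure.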

A comprehensive survey described in the paper examined the application of economic measures to the return on R&D as an investment in individual industries and at the national level. This document concluded that, while econometric methods have been useful for tracking private R&D investment within industries, the methods failed to produce consistent and useful results when applied to federal R&D support. An example of classical cost-benefit analysis applied to fusion-driven fission reactors is presented in the paper. More modern economic analyses relating productivity increases to amount of research performed, as well as studies that related academic research and technological innovations and then estimated the social rate of return from academic research are described and critiqued in detail.

These cost-benefit analyses appear to have limited accuracy and credibility when applied to basic research, both because of the poor quality of the cost and benefit data (a consequence of the large uncertainties characteristic of the research process) and because of the difficulty of selecting a credible time origin for the computations. As Prof. Averch stated: "In general, economists compute high social rates-of-return to most kinds of research. The rates, in fact, are usually much higher than those computed for other kinds of public investment. So there is a puzzle as to why research investments do not increase until their marginal return just equals returns from other public investments."
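The time-origin sensitivity can be made concrete with a toy rate-of-return computation. All figures below are invented: a program spends $10M per year in years 0-4, and benefits of $8M per year arrive in years 10-25. Truncating the study window, as a short retrospective analysis might, drops most of the early research costs and inflates the computed return.

```python
# Toy illustration of the time-origin problem in R&D rate-of-return
# computations. Cash flows are (year, amount) pairs in $M; all figures
# are hypothetical.

def npv(cash_flows, rate):
    """Net present value of (year, amount) pairs at a given discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes NPV changes sign on [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(cash_flows, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Costs in years 0-4, benefits in years 10-25.
flows = [(y, -10.0) for y in range(5)] + [(y, 8.0) for y in range(10, 26)]
print(f"IRR with full cost history: {irr(flows):.1%}")

# A study whose window starts at year 3 misses most of the research costs,
# so the computed return jumps.
truncated = [(y, a) for y, a in flows if y >= 3]
print(f"IRR with truncated window:  {irr(truncated):.1%}")
```

The same benefit stream yields a sharply higher return once early costs fall outside the window, which is one reason such computed rates can look implausibly high.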


Summary
Research impact evaluations are of potential use to sponsors in: deciding whether to continue or end a research program, or to increase or decrease its budget; changing the program, or its management, to improve the probability of success; altering policies regarding the procurement, conduct, or management of research; and building support with policy makers and other constituencies of the program. While peer review in its broadest sense is the most widely used method in research selection, review, and ex-post assessment, it has its deficiencies, and no single method provides a complete impact evaluation. The greater the variety of measures used to evaluate research impact, the greater the likelihood of converging to an accurate understanding of the knowledge produced by research. However, there is little evidence that the federal government uses multiple techniques in impact assessment, especially bibliometrics in support of peer review. This area is ripe for exploitation.

Ronald N. Kostoff is Director, Technology Assessment, Office of The Chief of Naval Research, Department of the Navy, Arlington, VA. The views expressed are solely those of the author and do not necessarily represent the views of the Department of the Navy or the Office of Naval Research.


1 Readers interested in obtaining the paper, or an abridged sixteen-page, forty-three-reference version, should contact Dr. Ronald N. Kostoff, Director, Technical Assessment, Office of Naval Research, 800 N. Quincy St., Arlington, VA 22217 (FAX: 703-696-0934; PH: 703-696-4484).
COPYRIGHT 1993 The National Association for Business Economists

Article Details
Title Annotation: methods
Author: Kostoff, Ronald N.
Publication: Business Economics
Date: Jan 1, 1993