
Trivia pursuit; too much of America's research money goes to studies nobody wants to read.

David P Hamilton is a reporter for Science.

Like most academic libraries, the Gelman Library at George Washington University is an impressive place if you're easily awed by the accumulated weight of human knowledge. Although the building itself, a squat seven-story slab of concrete and glass, won't win any architecture prizes, it does possess a certain solemnity. Just inside is a room filled with the modern equivalent of the card catalog: flashing terminals before which anxious students busily compile lists of the books they need for footnotes in their current research papers. Below, the basement holds several million pounds of government documents; above, three floors are devoted to classrooms and offices, while two more hold the library's collection of 1.5 million books. Sandwiched between the offices and the stacks is the periodicals floor, which holds both the popular magazine collection and perhaps as many as 10,000 scholarly journals: the published record of the world's past and present academic research.

Just standing in the presence of so much painstakingly assembled research is humbling. But once you begin to look with a critical eye through the material kept there, some of your awe might begin to wane. Pass through the current periodical section, and you'll find titles of "scholarly" research journals like School Food Service Journal, Bee World, and The Journal of Band Research. Pick up one of these journals and actually try to read it, and you can make an even scarier discovery: that an unfortunately large percentage of what passes as the bedrock of academic achievement more closely resembles intellectual quicksand. For instance, the literature chronicling recent research in the social sciences includes the following:

> "An Empirical Methodology for the Ethical Assessment of Marketing Phenomena Such as Casino Gambling" (Journal of the Academy of Marketing), in which University of Detroit professor Oswald Mascarenhas explains not only that gamblers are more favorably disposed toward gambling than non-gamblers ("teleological and deontological justifications of casino gambling were decisively low"), but that people look more favorably on gambling if they think they can get rich at it ("distributive justice related conditional acceptance of casino gambling was higher").

> "Securing the Middle Ground: Reporter Formulas in '60 Minutes'" (Critical Studies in Mass Communication), in which University of Michigan professor Richard Campbell analyzes 154 of the show's episodes and concludes that its meaning lies in "story formulas" in which reporters "construct a mythology for Middle America." Wait, there's more: "60 Minutes" has the power "to transform and deform experience, to secure a middle ground for audiences, and to build unified meanings in and for a pluralistic culture." (And you thought it was just a news program.)

> "Autonomy, Interdependence, and Social Control: NASA and the Space Shuttle Challenger" (Administrative Science Quarterly), a 32-page dissection of the NASA mistakes leading to the Challenger accident, after which Boston College professor Diane Vaughan concludes that "this case study does not generate the sort of comparative information on which definitive policy statements can be made." Those sorts of judgments, it turns out, require the systematic assembly of data on "the relationship between autonomy, interdependence, and social control in diverse types of regulatory settings." Even then, "difficulties in measuring variation in autonomy and interdependence" will make policy decisions "imprecise." And "our lack of skill at converting research findings into diagnostic recommendations for organizations" will also hinder the search for concrete solutions.

It might seem unfair to pick on the social sciences, which have long suffered by comparison with the more glamorous and better-funded "hard" sciences (physical and life science, medicine, and engineering). But the reputation of social science as a haven for work that tells us 1) nothing we need to know, or 2) nothing that we didn't know already, has resulted at least in part from the accessibility of the subjects that social scientists like to study. It's easy to dismiss Richard Campbell's study of "60 Minutes" as gasbaggery if you've watched the show. It's much harder to make a similar judgment about work in the hard sciences, such as an AIDS study published a few years ago in which researchers mistakenly claimed to have found a viral cousin to HIV, unless you've spent years doing similar research yourself. Even so, there are a number of good reasons to suspect that things are about as bad in the hard sciences as in social science, even if it's harder to figure out just from reading the published studies.

For instance, although scientists like to portray themselves as inquiring, dynamic researchers who strike out wherever their curiosity leads them, in reality most of the same criticisms levied against "pack journalism" in the political world can just as easily be aimed at "pack research." In 1986, when IBM researchers in Switzerland discovered a ceramic that became superconducting at much higher temperatures than previously thought possible, materials scientists all over the world piled onto the superconductivity bandwagon. Rustum Roy, a materials science researcher at Pennsylvania State University, told me about his experience as a session chairman at a meeting of the Materials Research Society: "We received 500 papers on superconductivity, and I told the conference organizers that we could eliminate 90 percent without hurting the session. They agreed, but said, 'Then [the authors] won't come to the meeting!'" Similarly, in the field of vaccinology, researchers for the past 10 years have devoted themselves to making what are called "subunit vaccines," work that has next to nothing to do with producing vaccines for real people. "Much of this research is very pedestrian," says MIT molecular biologist Richard Young, who did vaccine development himself until he became disgusted with the field. In 10 years, only one subunit vaccine has ever protected people against disease, a failure that Young attributes to vaccinologists' reluctance to really dig into the complex immunological systems that govern the body's response to disease.

This is not to say that academics are all stodgy, incompetent researchers who do derivative research while hiding their inadequacies with specialized jargon, statistics, and a lousy command of English. Obviously, a great many scholars in fields ranging from psychology to immunology to chemistry do make significant contributions, both to scientific understanding and to society at large. But many of their colleagues are falling down on the job, or unable to do it in the first place. And what's worse is that while they're stumbling, the government is picking up most of the tab.

With the exception of a few iconoclasts, however, no one spends much time figuring out what, exactly, society reaps from its substantial investment in academic research, or whether it might be better served by distributing government resources differently. Yet such questions bear asking. The public investment in research is huge: almost $16 billion in direct federal, state, and tax-deductible industrial support. As ostensibly nonprofit organizations, universities save billions more in exemptions from taxes on patent income and property. And although the success stories of academic research are undeniable (advances in computing technology, a variety of medical treatments ranging from vaccines to cancer therapies, and a better understanding of the geophysical forces that cause earthquakes, to name a few), these successes, and the foundation of basic research that has made them possible, represent only a tiny fraction of all the research performed each year. Beyond the tip of this iceberg, we aren't even beginning to get our money's worth.

Smoke or mirrors?

You won't get far with such concerns if you approach the guardians of the research establishment, of course. "Overall, the United States has the best research universities and graduate education in the world," says Robert Rosenzweig, president of the Association of American Universities, the Washington representative for 56 American and two Canadian research universities. Rosenzweig fairly well summarizes the conventional wisdom of the American research establishment: We're doing just fine, thank you. And a quick statistical look at the nation's research effort helps explain the smugness. In 1988 there were 356 accredited "research universities" supporting about 250,000 professors in some 275 disciplines ranging from applied mathematics to sociology to art history and criticism. Just over 1 million students toil away in the nation's graduate schools, and 34,000 of them receive doctoral degrees each year.

For Rosenzweig and others, there's just one fly in the ointment: researchers aren't getting as much money as they should. Despite unprecedented growth in federal research budgets over the past 10 years, complaints about a "funding crisis" in American research are picking up steam. Leon Rosenberg, dean of the Yale University School of Medicine, is one of the leading doomsayers. "Our nation's health research program is burning," he announced at an Institute of Medicine meeting last June. "For those of you in Washington who are unable to see the flames, I say, wake up and open your eyes. For those of you who can't smell the smoke, I say, please blow your nose and inhale again."

In fairness, Rosenberg's complaints can't be dismissed out of hand: Biomedical research costs have risen faster than inflation, and it is harder now than ever before for young scientists to get funding to start their own labs. But at heart, the idea of a research funding "crisis" borders on the fraudulent. American science is drawing more money than ever before from the federal government, and even the budget convulsions last year left research spending growing faster than just about any other segment of the federal budget. The National Institutes of Health (NIH), for instance, the primary source of biomedical funding and the target of Rosenberg's campaign, has grown by an average of 8.1 percent a year for the last five years.

Larger increases in federal research spending are politically unlikely in the future. But even if increases were feasible, it's not clear they would be a good idea. Within scientific circles, horror stories of studies rendered useless by bad methodologies, improper uses of statistics, shoddy data, sloppiness, or fraud are legion. "There are so [many] bad statistics," says Gabriel Weinreich, an acoustical physicist at the University of Michigan. "It's not a minor violation-it's really rather horrifying."

"I've never met a scientist who didn't believe that 80 percent of the scientific literature was nonsense," says Walter Stewart, an NIH researcher with a longstanding interest in the integrity of scientific research. Richard Young agrees: "I frequently have to go into the 'deep literature' (those journals I no longer have time to read on a daily basis) and it is often a waste of time." Young adds that 80 percent of scientific articles "could just vanish" without affecting the scientific enterprise.

Although hard evidence for such assertions is difficult to come by, there are a few indirect indications that such criticisms are, if anything, conservative. One way to measure the impact of someone's research is through "citation analysis," a process which basically amounts to counting the number of times a published study is footnoted in other scientific articles. Such analyses are far from foolproof: some scientists are unscrupulous and don't cite their colleagues' papers, while others can be influenced by a study they forgot to credit. Even with these caveats in mind, however, the results of a recent citation analysis are rather startling. The Institute for Scientific Information (ISI), a nonprofit organization in Philadelphia that keeps a large citation database, recently found that among papers published between 1981 and 1985 (in the hard and social sciences), just over half were not cited at all for five years after they were published.

That's a pretty stunning figure, but it gets worse. An earlier ISI study considered papers that were cited once or more, and found that only 46 percent of such papers were cited more than once. Combining the two figures (an admittedly imprecise calculation, but one that provides some sense of the problem's magnitude) leads to the conclusion that about 81 percent of all the published work in the sciences has next to no impact on the work of other scientists. And even that isn't the end of the story. ISI indexes only roughly 10 percent of all journals; this sample included about 4,500. "The conventional wisdom in the field is that 10 percent of the journals get 90 percent of the citations," says ISI analyst David Pendlebury. Extrapolating these trends into the bottom tier of journals, in another crude calculation, suggests that perhaps only three scientific articles out of every hundred are worth reading at all.
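The back-of-envelope arithmetic behind those estimates can be made explicit. The sketch below is not ISI's actual methodology; the 55 percent uncited share is an assumed reading of "just over half," and the final step assumes the unindexed 90 percent of journals contribute essentially nothing worth reading.

```python
# Back-of-envelope reconstruction of the citation arithmetic.
# Assumptions (not ISI's method): "just over half" uncited = 55%.

uncited = 0.55                 # share of indexed papers never cited in 5 years
multi_cited_if_cited = 0.46    # of cited papers, share cited more than once

# Papers cited at most once: the never-cited, plus the cited-exactly-once.
low_impact = uncited + (1 - uncited) * (1 - multi_cited_if_cited)
print(f"low-impact share of indexed papers: {low_impact:.0%}")

# ISI indexes roughly 10% of journals, which draw ~90% of all citations.
# Crudely crediting the unindexed bottom tier with nothing worthwhile,
# the worth-reading share of ALL papers is roughly:
indexed_share = 0.10
worth_reading = indexed_share * (1 - low_impact)
print(f"worth-reading share of all papers: {worth_reading:.1%}")
```

Under these assumptions the sketch lands at roughly 79 percent low-impact and about two worthwhile articles per hundred; the article's "about 81 percent" and "perhaps only three" presumably reflect slightly different rounding and a small allowance for the bottom tier of journals.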

Why do so many unremarkable articles, which usually reflect equally unremarkable academic studies, end up in print? The answer lies in three famous words: publish or perish. And the publish-or-perish principle is firmly rooted in scientific competition for grants and positions, a process exacerbated by the practice of granting faculty members lifetime appointments. Originally intended to protect the freedom of academics to study controversial topics, tenure has devolved into a seven-year review of junior faculty members that encourages useless publication in three ways. First, tenure candidates must convince their departmental peers that their research is up to the standards of the field, and the easiest way to do that is to amass a pile of impenetrable research articles, most of which are never read. "It wasn't like the ultimate outcome was scientific knowledge," said one researcher of her work at a Georgetown University medical school lab. "It was, like, just publish whatever you can to get more grants and more money." Or take the experience of University of Michigan President James Duderstadt. "As someone who has to read a couple of hundred casebooks a year for tenure decisions, I can say it varies significantly from discipline to discipline," Duderstadt says. "But it is clearly most out of control in the medical sciences, where if a person doesn't have over 100 publications listed in [his] biography you think there's something wrong with [him]." Unfortunately, Duderstadt isn't exaggerating.

Second, the tenure process creates a need to win the approval of academic peers and tends to reinforce conventional wisdom in a field, stifling innovative research. Surprisingly, for a community of supposedly open-minded scholars, academics generally prefer not to be challenged in their views. After reviewing the research on scholarly publication, University of Pennsylvania marketing professor J. Scott Armstrong observed several common themes that led him to devise an "author's formula." By this formula, authors wishing to increase the chances of getting their research published should 1) not pick an interesting topic; 2) not challenge existing beliefs; 3) not obtain surprising results; 4) not use simple methods; 5) not provide full disclosure; and 6) not write clearly. Similarly, when Douglas Peters of the University of North Dakota and Steve Ceci of Cornell University found that psychology journals tended to accept articles based on the perceived status of the authors, regardless of the contributions of the paper itself, their study was rejected by the prestigious journals Science and American Psychologist before it found a home in a journal devoted to controversial topics.

Finally, according to David Helfand, chairman of the astronomy department at Columbia University, tenure "can exclude productive, energetic scholars from the system, maintain unproductive, unmotivated teachers in our universities, and discourage our best young minds from pursuing academic careers." Helfand argues that tenured job security attracts too many scholars seeking a respite from performance reviews and often "locks in" whole generations of academics, many of them untalented, during periodic "hiring frenzies" brought on by the retirement of faculty hired during the last such wave. The resulting mediocrity of the professoriate goes a long way toward explaining the generally dismal state of academic research. "We are selecting those with the greatest need for security, and the least confidence in their ability to hold a job on merit," he says. (Helfand, by the way, is virtually unique among academics for rejecting his own tenure offer nine years ago in favor of a five-year renewable contract [see "I Turned Down Tenure," June 1986].)

Proliferate or perish

Less substandard work would make it out of the academy, of course, if there were fewer outlets in which to be published. There are more than 30,000 journals in the hard sciences alone, and well over 100,000 journals for all fields. Among the publications indexed by ISI (and remember, this is only about 10 percent of everything in print) are 312 titles in psychology, 73 in sociology, 369 in mathematics, 126 in botany, 293 in literature, 61 in food science and technology, 12 in ergonomics, 55 in library science, and 18 in parasitology. The amount of repetition is mind-numbing: anesthesiologists can choose between Anesthesia and Analgesia, Anesthesiology, and Anesthesiology Clinics of North America; psychologists have the option of publishing in Psychosomatic Medicine, Psychosomatics, and Psychotherapy and Psychosomatics. And even ISI indexes such out-of-the-mainstream journals as the Annals of Saudi Medicine, the Ethiopian Medical Journal, and the Journal of the University of Kuwait-Science (whose publication is, presumably, temporarily suspended).

As a result, it's hardly surprising that a sufficiently determined researcher can get his study, no matter how flawed it may be, into print somewhere. In fact, in one of the famous frauds of science, a medical researcher named Elias Alsabti published nearly 60 of his colleagues' old articles in obscure journals under his own name. Nobody noticed for almost three years, and even then the fraud was discovered only because Alsabti asked a colleague to review a paper that still contained clear references to the real author's identity.

It's fair to argue that journal publishers are merely responding to demand. But there would hardly be so many journals if there weren't money to support them, and that money comes almost wholly from the public. Many journals are put out by professional societies, like the American Chemical Society, which enjoy the advantages of nonprofit status. These societies fund their publishing operations through a combination of advertising and membership dues, the latter paid largely out of government research grants. But most journals (85 percent, by some estimates) are distributed by commercial firms that usually make a healthy profit off government largess.

In addition to whatever advertising revenue they can scrape up, commercial journals thrive on subscription rates nearly twice those of nonprofit journals, mostly paid by university libraries whose operating costs are largely covered by the government. As if that weren't enough, many commercial journals resemble vanity presses by making researchers pay for the privilege of seeing their articles in print. The highly regarded commercial biology journal Cell, for instance, charges researchers $15 a page to publish papers which often run 15 or 20 pages. Of course, page charges are usually paid out of government grants, too.

Some academics claim that the proliferation of journals and studies is merely evidence of a healthy scholastic enterprise. But, spurred on by all the bogus publishing opportunities, academics have become so specialized that they can sometimes barely talk to their counterparts, much less to students or their colleagues in other disciplines. "We have created a faculty of scholars frequently so narrow in their studies and specialized in their scholarship that they are simply incapable of teaching introductory courses," William Shaefer, a professor of English at UCLA, told U.S. News and World Report. James Trefil and Robert Hazen, two George Mason University science professors and the authors of the new book Science Matters: Achieving Scientific Literacy, recently found that in a group of 24 physicists and geologists, only three could explain the difference between DNA and RNA, a fundamental piece of information in the life sciences. And those three did research in areas where the information is a professional necessity.

The situation has gotten so bad that even the normally lethargic academic establishment is waking up. Some universities, led by the Harvard Medical School, are beginning to limit the number of papers they'll accept from faculty members up for promotion or tenure; the National Science Foundation has done the same for grant applications. The idea is to discourage "salami science," the practice of breaking research into the "lowest publishable units." Some institutions (particularly state universities, where "publication is seen as the road to respectability," in the words of Harvard education professor Vito Perrone) are likely to resist. "More often these days, publication is not regarded as a way of communicating knowledge, but of promoting faculty," says Armstrong.

Sturgeon's law

Although there's a significant amount of money wasted on useless research, the financial cost is only part of the reason to be concerned. It's worth remembering that every hour faculty members spend conducting experiments, taking surveys, or "deconstructing" Jane Eyre is an hour they're not spending with students. Education has become a bottom-drawer priority for most academics. "You simply don't get rewarded for teaching students," says Robert Collins, an English professor at Florida Atlantic University. "Spending time with students is suicide in this competitive atmosphere." Surveys reveal that the average number of hours faculty spend in the classroom each week has fallen from 10.5 in 1980 to only 8.7 in 1989. By contrast, faculty at research and doctorate-granting institutions now spend an average of 19.3 hours a week on research-related activities.

This suggests that anything that shrinks the amount of time and energy devoted to marginal research would have to be an improvement. The research establishment, of course, is resistant to the very idea. When I asked him how much academic research activity is actually useful, the Association of American Universities' Robert Rosenzweig shied away from the question. "Any guess I could make would surely underestimate its value," he says. While it's indisputable that "some research results are less interesting than others," it's "difficult to tell in advance which bets will turn out successful, and which won't; I can't think of a better way to operate the system than by having the people involved judge what's the best line of inquiry. No one's come up with a better system yet."

Maybe not. But Rosenzweig's answer might be called the Sturgeon's Law defense, in honor of the maxim, credited to the late writer Theodore Sturgeon, that "90 percent of anything is crap." In other words, since we can't figure out in advance who will have the bright ideas, we'd better fund everybody. The price (nine mediocre researchers for every brilliant scholar) is one society had better just shoulder. This argument suits legions of mediocre academic researchers just fine. But it's not exactly a guide for a defensible public policy.

Presumably, Rosenzweig thinks the alternative to the current system would be some draconian setup whereby a panel of bureaucrats attempts to judge whether research into the immune system of rats is more worthy of federal money than high-energy physics. There are a number of enthusiasts for just this approach; Drexel University science historian David Noble suggests that democratic control of research funding would go a long way toward asserting the needs of society over the often misplaced priorities of scientists and other academics. But political control of science has an unhappy history, one filled with horrors such as T. D. Lysenko's fraudulent crop genetics in the Soviet Union and the barbaric medical experimentation of Josef Mengele in Nazi Germany. Instead, it might be useful to consider ways of improving the quality of research by eliminating the perverse incentives that skew the present system.

The federal government is in a unique position to pressure universities to change, thanks to the leverage provided by the research grants it parcels out to institutions. It's not widely known outside the research community, but universities actually collect a fair amount of money from these grants through indirect "overhead costs" ostensibly related to utilities and the upkeep of research facilities. At Stanford, which has the highest indirect cost rate in the country, a $100,000 research grant actually costs the government $178,000: $100,000 to cover the researcher's salary, his graduate students, and equipment, and $78,000 for the university's purposes. (Stanford is also currently under investigation for inflating its reported costs, which is less an argument for cutting back such reimbursements across the board than for letting federal auditors keep a closer eye on the money.) As a result, universities have every incentive to comply with whatever strings the government decides to attach to their money. The success of this approach has already been demonstrated: NIH, for instance, promulgated a set of guidelines for research practices and the handling of misconduct allegations.

Class consciousness

One obvious reform would be to have all research funding agencies (NIH, the Department of Defense, NASA, the Department of Agriculture, and the Department of Energy) follow the National Science Foundation's lead and limit the number of papers applicants can include with their grant applications. But the agencies can go one step further and require institutions that take federal money to do likewise in their promotion and tenure decisions. Removing the incentive to publish as many papers as possible in the traditional seven-year race for tenure would encourage professors to concentrate on producing only their best papers.

Similarly, it wouldn't hurt for funding agencies to be more aggressive about the extent to which they reimburse library costs-a strategy that could help indirectly cut back on the number of journals in circulation. After all, who needs 138 journals of economics? Right now, only those economists seeking a home for the article no one else will print. By cutting back on the number of journals carried by libraries and subscribed to by researchers, the government can stop subsidizing the useless sectors of the academic publishing industry.

These simple measures could go a long way toward curbing the more flagrant failings of the current research system. In the long run, however, more systemic change is needed. Redirecting the energies of the academy away from self-promoting activity and back toward its primary goals-educating and advancing the state of knowledge-will require fundamental changes in academic culture. And the initiative for such change will have to come from the universities themselves.

Would-be reformers could do worse than to look into a recent report by Ernest Boyer of the Carnegie Foundation for the Advancement of Teaching. Entitled Scholarship Reconsidered: Priorities of the Professoriate, it begins with the startling fact that 60 percent of today's faculty believe that promotion should be based primarily on teaching ability rather than research prowess. At research universities, a smaller-but still significant-21 percent of faculty agree that current priorities are scrambled. Everett Ladd, a professor at the University of Connecticut, told Boyer that the emphasis on research and publication is "seriously out of touch with what faculty actually do and want to do."

In order to allow professors to bloom as teachers, Boyer makes a series of simple yet sensible recommendations. Universities should consider a broad range of writing in evaluating their faculty: everything from traditional research articles to textbooks and popular writing, such as the books and magazine articles of Stephen Jay Gould. They could also set guidelines for evaluating other scholarly contributions, such as the development of educational software or audiovisual materials. The control academic departments hold over promotions would necessarily be diminished, since Boyer also quite sensibly recommends rewarding interdisciplinary scholarship. Three tiers of teaching evaluations (self-assessment, student evaluations, and peer reviews) should also be incorporated into promotion decisions. (Despite lip service to the importance of teaching, few academic departments these days really consider teaching ability to be on the same plane as grant-getting and publishing. "It's true that a person with a good teaching record and a mediocre to poor research record will not be promoted [at MIT]," says Gene Brown, MIT's dean of science. But Brown adds that a mediocre teacher with an outstanding research record probably would get tenure.)

If universities really want to do something about improving the quality of research, they had better pay attention to suggestions like Boyer's. The crisis in academic research is real, even if it's not the one that critics such as Yale's Leon Rosenberg like to complain about. Spending more money on research isn't the answer. Spending it smarter is.
COPYRIGHT 1991 Washington Monthly Company

Author: Hamilton, David P.
Publication: Washington Monthly
Date: Mar 1, 1991