Using Research: Being a Responsible Consumer (Notes from NCNE)
Research has a definite place in the management of nonprofit organizations. Information on economic trends or the budgets and activities of organizations can help you make decisions about how to position yourself vis-a-vis your clients, competitors, and funders.
Data on your members, clients, or audiences can help you make strategic choices about the programs you offer. Foundations and policy-makers use research to guide their funding and policy decisions.
There is a growing contingent of nonprofit sector researchers. Part of their job is to stay abreast of the work of their colleagues. Like many nonprofit managers, however, nonprofit researchers are often tempted to skim over details and read the abstracts, brochures, and headlines that pass for research findings these days.
Sometimes, that's all that full-time research professionals have time to do. In many cases, though, reading the summaries and taglines is not enough. Brochures, newspaper stories, and funder reports rarely dwell on the details of how a study was conducted. They typically report soundbite results that busy managers can absorb quickly as they move from one piece of mail to another.
However, one can't always trust the soundbites.
If one headline reports that "corporate contributions to nonprofits are rising" and another reports that "corporate contributions to nonprofits are falling," which one should be believed?
Most research reports include a section that gives all the details of how the study was conducted. If readers want to determine how two studies could come up with opposite conclusions, the answer usually lies in these details.
Whether one takes the time to weigh those details and decide which report to trust is a question of whether one is a responsible consumer of research. A responsible consumer assesses the validity of a research project before deciding whether to trust its conclusions.
Ironically, more carefully done research often results in more guarded and less flowery conclusions than research that is less carefully done. Many people who do research on nonprofit organizations (and a great variety of other social creations) fancy themselves to be scientists.
As scientists, they use scientific methods in their work and adopt much of science's philosophy about how knowledge of the world is generated. For example, social scientists believe research should be replicable, so they describe their methods in enough detail that someone else could repeat the study and see whether it yields the same results.
Another scientific principle, and the one that is often overlooked by everyday consumers of research, is skepticism. No matter how careful they are when they conduct their work, social scientists are supposed to be skeptical of their findings. One study never "proves" anything.
Understanding of the world is built by conducting a lot of different research projects, or maybe even doing the same one several different times. If the research is carefully done, the results make sense, and other studies corroborate the conclusions, we increasingly build trust in our findings.
These scientific principles are hard to maintain in a world that rewards quick results and unqualified soundbites. The trend in social policy research is to conduct one large scale study or one experimental evaluation, infer as many grand results as possible, and make policy decisions based on those results.
While most researchers are hesitant to draw broad claims from their surveys and experiments, funders, journalists, and policy makers prod them to "go beyond their data" to create "actionable" conclusions. Whenever nonprofit researchers see big claims, that's a prompt to put on their "responsible consumer" hat and dig into the details of the study. Their first question is often, "How did they determine that?"
A case in point
"Difficult Road Seen for Midsize Arts Groups," trumpeted The New York Times. "Midsize Performing-Arts Organizations Face Financial Threats," read the headline in a nonprofit trade publication.
If you happen to keep up with the field of research on nonprofit arts organizations, you have probably either read or heard about a recent report released by RAND. This much-anticipated research report was funded by the Pew Charitable Trusts as part of its five-year initiative called Optimizing America's Cultural Resources. The Pew Trusts is a prominent funder of arts and culture in the United States. Although RAND is relatively new to the arts research field, its reputation as a high-quality research organization is well-known and deserved. The study is a major contribution to the field of research on the performing arts.
Since the report is authored by responsible social scientists, it is appropriately careful about the research claims. It is self-conscious about the fact that it relies on secondary data cobbled together from a variety of sources.
Nonetheless, in an apparent effort to make "actionable" claims, the report's authors speculate about the near future based on various trends or hunches about how organizations behave. For example, the report notes that mid-sized arts organizations rely less on volunteers, so they might have a harder time downsizing their staff during hard fiscal times.
Finally, in the concluding chapter, the report goes so far as to say that mid-sized organizations face challenges that will cause them to either adopt professional standards like large organizations, focus on niche markets like small organizations, or wither away and die.
The claim is not much more prominent than any of the other claims in the report, and the RAND scientists responsibly note the limitations of their data, qualify their conclusions as a "potential" decline of the middle tier, and even conclude that their concerns about the future of medium-sized arts organizations "may be overblown."
These conclusions are a far cry from the headlines and soundbites. Early buzz about the report characterized it as a harbinger of doom for mid-sized arts organizations. The imminent demise of the nation's mid-sized arts organizations became the leading paragraph in project press releases, and major media outlets focused their attention on the report's rather tenuous conclusions about the future of this breed of performing arts organization.
And why shouldn't the media believe the press releases? After all, RAND and the Pew Trusts wouldn't say it if it wasn't true. And, after all, The New York Times and the trade publication know how to dissect research reports. Right?
Not long after the report came out, a major foundation held a meeting to consider its position on whether or not to extend its traditional funding of large arts organizations to mid-sized arts organizations. Said a foundation rep, "RAND says that mid-sized arts organizations are all going to wither and die, so why should we throw good money after bad?" The social scientists in the room thought immediately about skepticism and responsible consumption of research claims, but no one said anything.
However, after the meeting, the researchers encouraged the foundation to look at the report again to see how much stock the RAND scientists put in their own claims. They found that the RAND results are not based on a recent decline in mid-sized arts organizations. Rather, they are based on perceived shifts in the environment and observed characteristics of mid-sized groups that the authors hypothesize will conspire against the middle tier.
This is enough of a leap that the researchers themselves acknowledge that the future may well unfold in ways other than the one they have imagined. The foundation representative then observed that the events of September 11 will have an unknown influence on the future of mid-sized arts organizations - a factor that the RAND report could not foresee.
In addition to carefully reading the method and caveats of a research report, researchers also ask, "What kind of information is available to confirm or dispute these conclusions?"
The National Center for Charitable Statistics (NCCS) in Washington, D.C., maintains and distributes annual databases of Form 990s. NCCS put together a file of 7,266 arts organizations from the early part of the 1990s and observed how many of them had disappeared by the later part of the decade.
Most research on the closure of organizations finds that small organizations are much more likely to close than medium-sized and larger ones, but the NCCS researchers wanted to specifically investigate whether mid-sized organizations had experienced particular problems during the 1990s. They explored the size question a couple of different ways, but could not confirm that mid-sized arts organizations have been at any greater risk than large arts organizations. They could only confirm that mid-sized arts organizations fare better than small arts organizations.
Using the Form 990 data, the NCCS researchers also explored whether mid-sized arts organizations differ substantially from other arts organizations on some standard measures of financial vulnerability. Mid-sized arts organizations are not any less diversified in their income sources, on average, than large organizations, while both size categories are more diversified than the average small arts organization. Mid-sized arts organizations have higher administrative costs than both small and larger arts organizations, which is consistent with RAND's assessment that mid-sized arts organizations are less reliant on volunteers.
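The article does not say exactly how NCCS measured income diversification, but one common way researchers operationalize it is a Herfindahl-style concentration index: the sum of squared revenue shares across income sources. The sketch below is a hypothetical illustration of that technique, with made-up revenue categories and dollar amounts, not actual Form 990 data or NCCS's exact method.

```python
def revenue_concentration(revenues):
    """Herfindahl index of a revenue mix.

    Returns a value between 1/n and 1.0: a score of 1.0 means all income
    comes from a single source (least diversified), while a score near
    1/n means income is spread evenly across n sources.
    """
    total = sum(revenues.values())
    if total == 0:
        raise ValueError("organization reports no revenue")
    return sum((amount / total) ** 2 for amount in revenues.values())


# Illustrative revenue mixes (hypothetical numbers, not Form 990 data):
diversified = {"contributions": 250_000, "program_fees": 250_000,
               "government_grants": 250_000, "investments": 250_000}
concentrated = {"contributions": 950_000, "program_fees": 50_000}

print(revenue_concentration(diversified))   # evenly split 4 ways -> 0.25
print(revenue_concentration(concentrated))  # one dominant source -> 0.905
```

Under a measure like this, "more diversified" simply means a lower concentration score; comparing average scores across size classes is one way a researcher could test whether mid-sized organizations are less diversified than large ones.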
However, the academic literature indicates that higher administrative costs are associated with a lower risk of organizational failure since organizations with high overhead can cut back on administration (rather than programs) during hard times. Given these assessments, one could conclude that mid-sized arts organizations are well positioned to take on the future.
The most appropriate conclusion is that nonprofit researchers do not yet know enough to make solid predictions about the future. Although the phrase "this question requires future study" sounds like a cop-out, it is probably true in this case.
The larger point, of course, is that the responsible consumer of research always scrutinizes big claims. Always ask, "How did they come up with that?" Not everybody can go to the trouble that NCCS did to consult an independent data source to see how someone's research claims stack up.
However, everybody can put on their skeptic's hat, track down the report, and look under the hood.
Elizabeth Boris is director of the Center on Nonprofits and Philanthropy at the Urban Institute in Washington, D.C. Mark Hager is a research associate with the organization. The National Center on Nonprofit Enterprise, which provides this column, can be reached at www.nationalcne.org or by email at firstname.lastname@example.org.