
Think "results," not "evaluation": before learning the "nuts and bolts" of how to do evaluation, nonprofit professionals must make a shift from viewing it as a negative to viewing it as a valuable aspect of organizational effectiveness.

When I say "evaluation" to nonprofit professionals, I often see cringes and scowls. Some think back to statistics classes and number crunching. Others speak of the challenges of not enough time, money, or capacity to do evaluation. All are wary about how evaluation findings will be used to make judgments about their programs. They think funders only want to hear good news. Consequently, nonprofits may not share the real lessons when what they learned is not positive; instead, they tell funders what they think funders want to hear. The result is a lot of evaluations, often poorly planned, and little commitment to using the findings to improve program effectiveness.

Two factors are forcing nonprofit organizations to reconsider the use of evaluations: the expectation that nonprofits demonstrate accountability by achieving program goals, and increased competition for limited funding. Together, these create the demand for nonprofits to "measure outcomes." Organizations with data and information showing they are accomplishing what they intended will win the battles for new or continued funding.

This is a tough challenge for many nonprofit professionals. They know why they must learn how to evaluate and measure outcomes, and there are many good resources available for learning outcome and evaluation "nuts and bolts." The Kellogg Foundation's Logic Model Development Guide gives a solid introduction and templates for program planning and evaluation. The University of Wisconsin Cooperative Extension's program development and evaluation Web site has helpful interactive online evaluation courses. Innovation Network's Workstation requires registration but is free to use; registered users gain access to tools that help organizations identify outcomes and create evaluation plans.

But, before embarking on the task of learning, my advice is for nonprofit professionals to shift their negative views about evaluation to a view that embraces it. My approach for making this shift is fairly simple. Before focusing on the "nuts and bolts," I emphasize how the evaluation process and similar activities will contribute to the organization's success.

Continuous Learning to Increase Organizational Effectiveness

I have learned that an important part of getting nonprofit professionals to see the benefits of evaluation and measuring outcomes is showing how an ongoing process of formal assessment will help an entire organization become more effective. This process, also known as continuous learning (see Figure 1), incorporates feedback from evaluation, outcome measurement, and other assessments into discussions of how to improve existing programs, as well as develop new programs. My goal is to get them to see how they can use data, analysis, and conclusions to make informed decisions to run their programs. The benefits include successful programs, being able to demonstrate accountability, making decisions to efficiently allocate resources, recruiting and retaining qualified staff and volunteers, developing better budgets, and an increased ability to receive funding from a variety of sources (United Way of America, 2000). When nonprofit leaders see this connection, they make the shift. At the start of a long-term evaluation project, one of the participants felt the process would be a waste of time because previous evaluations never had any value. By the end, however, he was an avid evaluation advocate. His organization was able to use the evaluation findings to build the organization's capacity and leverage additional funding.


Data Collection and Analysis

When I discuss measuring outcomes and undertaking evaluation, most questions focus on how the two differ. I emphasize how they are related; you can't conduct an evaluation without outcomes.

Outcome measurement is the regular, systematic tracking of the extent to which program participants experience the benefits or changes intended (United Way Outcome Measurement Resource Network, 2005).

Evaluation is the systematic collection of information about a program in order to enable stakeholders to make judgments about the program, to improve program effectiveness, and to make decisions about future programming (Patton, p. 23).

Both are part of a process I call results measurement planning, which focuses on the different ways organizations can assess progress toward achieving a program's intended goals. The decision, therefore, is not whether to evaluate or to measure outcomes. It is how both will be used to determine whether, and why, a program is getting the results the organization said it would get.

Be Realistic About Outcomes

A key barrier to achieving results is the pursuit of unrealistic outcomes. My experience is that many organizations do not invest sufficient time and resources to develop useful program outcomes. Ideally, identifying outcomes should be a process driven by previous program assessments, prevailing theories, and knowledge of the community or clients served, balanced against the organization's current human and financial resources. This approach requires time, a resource frequently in short supply at busy nonprofit organizations. Instead, programs are "created" to meet funding source guidelines. The grant-writing department, executive director, or others complete grant applications without consulting program managers. Goals and outcomes are established in a last-minute dash to meet a deadline. The result is often a set of unattainable outcomes that sets the program up for failure because it cannot produce the expected results.

Regardless of the amount of time available, a key element is acknowledging that outcomes occur on a continuum of change that moves from the short term to the long term (see Table 1). Many nonprofit organizations err by setting medium- and long-term outcomes when, given the length of their programs, they should expect only short-term outcomes. The target should be realistic outcomes that reflect the time, funding, and activities available to work with the population they want to help.

When Possible, Include Others

Identifying outcomes and creating evaluation plans should be a collaborative process. Some activities require external evaluators to work independently to collect data and draw conclusions, but most organizations can engage staff, clients, partners, funders, and other stakeholders. Every stakeholder expects an outcome, and sometimes these expectations are not discussed until the evaluation begins, revealing major differences of opinion. In an evaluation of a project sponsored by three organizations, our evaluation team learned quickly that the partners had never discussed what they expected the project to achieve. They successfully funded a project that built upon each organization's core strengths, and they could agree on what type of activities and products they wanted to create. However, they never had a conversation about the expected outcomes of their joint efforts.

Input from stakeholders is essential for quality outcomes and evaluations. Their experiences and expertise can raise issues that should be addressed when developing outcomes. Their input ensures that relevant information is collected to measure outcomes and complete evaluations. In addition, collaboration supports a cycle of continuous learning because stakeholders are encouraged to use feedback from results measurement to guide their work.

Integrate Results Measurement into Program Planning

It is unfortunate that most evaluation planning occurs toward the end of a funding cycle. This adds to the perception that conducting evaluations is an extra burden for the organization. This perception is accurate because by waiting until the end to plan for an evaluation, the staff must now find time to create evaluation questions, review documents, find program participants to interview, analyze data, and write a report. This burden diminishes when evaluation is integrated into the process of program development and review. If outcomes and evaluation questions are identified when a program begins, it is easier to create useful data collection tools that can be modified throughout the life of the program. There should be a priority on reviewing reporting requirements to see how they can be used to support current evaluation needs. What staff members resent most is an onerous process that asks them to fill out too many forms. When evaluation activities are incorporated into the normal work day, employees understand why information is being collected and how it will be used.

Tell Your Story, Communicate Your Findings

Evaluation helps an organization share its successes and failures. I encourage organizations to develop a dissemination plan that helps them create and take advantage of opportunities to share conclusions from their results measurement activities. This information should not be left in the form of a final report. Instead, the report content should be "cut and pasted" to inform different audiences about the findings they will find most interesting; every stakeholder is a different audience. Organizations should review the array of written materials they produce: results information can be integrated into newsletters, press releases, Web sites, mailings, solicitations, grant proposals, brochures, and annual reports. Don't rely solely upon written materials; consider presentations at workshops and conferences. If an organization is included in an evaluation with other organizations, ask whether each can have a separate report summarizing its relevant findings. To encourage participation in a multiyear evaluation of two pilot projects, I was asked to prepare shorter, separate evaluation reports for each project. Though everyone was interested in the overall success of the initiative, the executive directors of the pilot organizations saw how they could use the evaluation findings to support their programs.
Table 1. A Continuum of Outcomes: What Changes Can You Expect Over Time
(Innovation Network, 2004)

Short Term (occur immediately or within several months): Awareness, Knowledge, Attitudes, Skills, Opinions

Medium Term (within the first year): Behavior, Practice, Decisions, Policies, Social Action

Long Term (after the first year): Social, Economic, Civic, Environmental


Innovation Network. "Evaluation in Nonprofit Organizations." Presentation by Monica Heuer and Veena Pankaj. April 2004.

Patton, Michael Quinn. Utilization-Focused Evaluation. 3rd ed. (SAGE Publications, 1996).

United Way of America. Agency Experiences with Outcome Measurement: Survey Findings. 2000.

______. United Way Outcome Measurement Resource Network, Frequently Asked Questions. 2005.

Dr. Margo Bailey is an Assistant Professor at American University's (Washington, DC) Department of Public Administration and an evaluation consultant for nonprofit organizations. The author would like to thank her colleagues at Innovation Network for allowing her to use their materials in this article. She can be reached at
COPYRIGHT 2005 Bureaucrat, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2005 Gale, Cengage Learning. All rights reserved.

Article Details
Author:Bailey, Margo
Publication:The Public Manager
Geographic Code:1USA
Date:Mar 22, 2005
