As individuals and as parts of organizations, most of us are natural advocates—for our communities, for our values, for our children. But if we want a cleaner environment for our community, more funding for our nonprofit organizations, or better schools for our kids, we had better be able to make the case that more funding or attention is likely to yield better results. Making that case requires evaluation—of what we have done in the past and of what we may be able to do in the future.
Based on my years of working with organizations to design better evaluations, here are a few initial suggestions for designing and implementing more effective evaluations.
- Focus on outcomes, not only on outputs. Outputs are the goods and services we provide; outcomes are the results of what we provide. A teacher provides information and support (outputs), which hopefully results in greater knowledge and increased capacity to succeed (outcomes).
Aim for a small set (2-3) of outcomes that reflect the results—actual or potential—of what you do. Funders, policy-makers, and potential volunteers are more likely to help if they understand not only what you do but what difference it will make.
- Think about a sequence of outcomes. Your results are not isolated measures; they are part of a process of intended improvement. It is often helpful to create a causal sequence of results, starting from your activities and leading toward an ultimate desired goal.
For example, a social service agency helping homeless individuals might have an initial outcome of increased self-confidence, leading to increased capacity to look for work, leading to applying for apartments or jobs, and then resulting in obtaining and keeping stable housing for a period of at least three months. Each of these stages can be measured with a separate outcome indicator. Having a sequenced set of outcomes allows you to take credit for realistic, more modest results that you might otherwise overlook.
- Tie outcomes to activities. This is one of the most commonly ignored guidelines. Try to identify your desired results by looking realistically at what you actually are doing, and what you can expect to change as a result.
For example, if you are mentoring students one hour a week on their reading skills, you might expect to identify some improved attitudes toward learning or some increased specific reading skills, but you are not likely to have much of a measurable impact on family stability or community graduation rates. The more closely you can connect your specific outcomes to your specific actions, the more credible your evaluation will be, and the less likely you will be to create unrealistic expectations.
- Design an evaluation that is feasible for you to complete. Don't set unrealistic goals for how sophisticated your evaluation will be. In most cases, you aren't trying to conduct research that will be published in refereed academic journals; the standards they use for "convincing" evidence are generally unreasonable for most individuals and organizations to meet. Instead, think about the question: "How can I make a plausible case to my key audiences that what I am doing makes a difference?"
For some audiences, a couple of carefully selected outcome measures, with pre- and post-test scores, might be sufficient; for others, a couple of well-presented narratives of people who have benefited from what you've done might be enough. Think about your capacity to design and implement an evaluation. It's better to bite off a small piece and carry it through than to attempt a more ambitious project and have to abandon it.
- Collaborate and compare. Talk to your peers in similar organizations or communities, and share ideas and experiences. Consider sharing the workload by conducting a combined evaluation of different programs or activities. Also, evaluations are stronger if they have a comparative component—how do your results compare to those of other programs? If others have gathered information, seek it out and use it as a benchmark or a starting point for your own evaluation. If you are thinking of giving a survey to some clients, see if other organizations have already developed survey questions. The internet is a great source for finding other organizations and evaluations that can help you.
There's a lot more to say about all of this, and I'd be glad to talk more about improving your evaluation capacity. But I hope this has provided some initial thoughts on how to proceed. Good luck!