By: Shelly Schnupp
Challenging times often call for difficult decisions. The Covid-19 pandemic has presented significant challenges to individuals and organizations alike, including nonprofits. Long interruptions in service delivery, shifting client needs and lost revenue streams are forcing some nonprofits to face uncomfortable decisions about programming, some avoided for years and others that have surfaced only recently. Program evaluation can be especially helpful in preparing nonprofits to make those difficult choices. Unfortunately, evidence indicates that, despite numerous national and local initiatives, evaluation continues to be an area of struggle for many nonprofits.
A recent study conducted by Project Pivot and UW Milwaukee[1] identified program evaluation as an important challenge facing Milwaukee nonprofits. While many reported engaging in some type of program evaluation, 80% of respondents said that program evaluation was challenging, and some indicated they did not engage in it at all. Innovation Network’s national study of nonprofit evaluation practices[2] identified barriers to evaluation, with (no surprise) staff time, financial resources to support evaluation and staff skills continuing as the top barriers. The 2016 study also revealed a more disturbing finding: “Having staff who did not believe in the importance of evaluation” was reported as a barrier by more than twice as many nonprofits in 2016 (25%) as in 2012 (12%).
Create a Learning Culture
At Spectrum Nonprofit Services, we recognize the role of evaluation in our strategy model. According to my colleague Steve Zimmerman: “To be successful, an organization’s strategic direction needs to recognize and hold all the elements – community influences, revenue generation and intended impact – together, while also providing the flexibility to adjust course as leaders learn from implementation.” Learning comes from evaluation, some formal and some informal, and especially from ongoing evaluation of program outcomes. You can’t learn unless you evaluate.
In my experience, the idea of evaluation continues to intimidate nonprofits. Outside critics rendering judgments about work already completed, or recommending random assignment and control groups, are rarely welcome. Even taking on the responsibility for evaluation internally raises concerns about staff time and capacity, another barrier identified in the Innovation Network report. And funder expectations of evidence can be unrealistic: can one program be expected to reduce chronic homelessness? By making evaluation less formidable and more useful to nonprofits, might we move the needle on belief in evaluation, and hopefully engagement with it, back in the right direction? Some promising developments may help.
Keep Measurement Reasonable
While the terms are still often used interchangeably, more people are distinguishing between impact and outcome evaluation (although the distinctions are not always consistent): impact evaluation is seen as measuring longer-term, often systemic, community-level change, while outcome evaluation measures the shorter-term changes experienced by individuals that are necessary for systemic change. For most nonprofit programs, measuring outcomes is as reasonable as it gets. Matthew Forti, in “Seven Deadly Sins of Impact Evaluation,” points out that nonprofits rarely have the resources required to prove (or disprove) impact, but they can collect information that provides “significant insights about how well an organization’s programs are working and how they can be improved.” Similarly, the Gates Foundation, in its discussion of “actionable measurement,”[3] notes that it is reasonable to measure program outcomes for their contribution to impact rather than their attribution to impact. I’ve long appreciated the view that impact is what we hope for, but outcomes are what we work for (and I wish I knew who to credit!).
Learn from Others
Most nonprofits will never engage in impact evaluation, which Forti and others view as requiring a third-party evaluator to validate the attribution of programs (including collaborative initiatives). But nonprofits can certainly learn and borrow from those whose services, outcomes and impact have been validated through evaluation research methods. One of the more interesting (and useful) approaches to logic models that demonstrates this thinking is referenced in the NH Center for Nonprofits’ “conversation with Peter York,”[4] a national spokesperson for impact measurement and organizational learning. In addition to the typical logic model elements (program resources, program elements or activities, program outcomes and community impact), the model inserts a “Studies Show” section: a brief description of the evidence base showing how the program outcome has been determined to contribute to community impact.
I would suggest that this form of logic model could join other tools to help nonprofits: understand their contributions to impact; increase their comfort level in measuring what is meaningful and relevant to their scope of services, hopefully on a regular basis; learn what works (and what does not work) and modify program offerings accordingly; and understand and communicate their contributions better.
Moving toward nonprofit sustainability, whether times are stable or volatile, requires making ongoing strategic decisions that account for both mission impact and financial viability. Nonprofits can use evaluation to inform decisions about how programming contributes to the mission impact part of the sustainability equation. Efforts to make evaluation feasible and useful can help nonprofits prepare for and navigate challenging times, now and in the future.
- Understanding and Supporting Milwaukee Nonprofits, a study of the needs of nonprofits in Milwaukee. Project Pivot, 2020. https://www.pivotwi.org/news/2020/8/26/study-complete-results-amp-recommendations
- State of Evaluation 2016, Innovation Network, https://www.innonet.org/media/2016-State_of_Evaluation.pdf
- A Guide to Actionable Measurement, Bill & Melinda Gates Foundation, 2010. https://docs.gatesfoundation.org/Documents/guide-to-actionable-measurement.pdf
- Outcome Measurement: Thinking differently about outcome measurement, A Conversation with Peter York, Nonprofit Notes, New Hampshire Center for Nonprofits, Fall 2013. https://www.nonprofitnext.nhnonprofits.org/sites/default/files/resource_library/Thinking_Differently_About_Outcome_Measurement.pdf
Header Photo by Dan Dimmock on Unsplash