Evaluate. And Disseminate.

Demonstrating good work is how nonprofits gain support. Helping our communities understand how that good work makes long-term change is the next step. In a new post published by the Stanford Social Innovation Review, Caroline Fiennes suggests that most nonprofits ought to be gathering information on their performance. Fiennes goes on to suggest that evaluation of the idea behind the work (rather than simply monitoring outputs) is often best done by outside professionals rather than by the nonprofits themselves.

Monitoring performance data is an absolute must for any charity that is serious about showing real impact in the world. Some nonprofits may even find a community of support that doesn't feel the need for evaluation of the idea behind the work. Basic human needs organizations may not need to show why providing a hot meal to people in need is changing the world; they may be able to get support because providing a hot meal is simply the right thing to do.

Many of the rest of us in the social sector are in the position of bringing our supporters along not just on our performance data (we delivered 200 meals today) but also on our long-term outcomes (the people we fed were able to start looking for work so they could meet their own needs in the future, and 10 of them did find employment). Fiennes is right that creating a complex theory of change is easier with social scientists from prestigious universities at your disposal, but that shouldn't stop a nonprofit bent on changing the world from taking a good run at it and correcting course as it goes. We should help our communities understand why we believe that a hot meal one day allows for a job search the next day, and that a job search the next day leads to better outcomes the day after that. Fiennes clearly wants this evaluation to happen, but there is more than one path to that door.

Nonprofits can gather data that not only monitors performance but also reshapes the ideas we bring to our business of improving lives. If we make a compelling case that our ideas are better than someone else's, it is entirely possible that outside researchers can help us refine this work, whether hired directly to do the evaluation or because it happens to fall within their area of study. Those of us in the field may already have a strong hypothesis for how change in the world comes to be, so let's evaluate our work as well as monitor it. Then let's share that idea far and wide and encourage a discussion of where it can be improved, or whether we've run completely off the rails. Let's engage our supporters (financial and otherwise) in an open conversation about why this idea is working, while allowing for the possibility that some other charity somewhere else has actually invented a better mousetrap.

Yes, a well-structured evaluation plan is absolutely the goal. But waiting for perfect evaluation, or even for really, really good evaluation, may keep us from trying perfectly good ideas. Some of those good ideas may, in fact, turn out to be great ones.

Originally published at Community Tech Knowledge. Copyright by Community Tech Knowledge, republished here with permission. All rights reserved.