Once again, I am sucked into the discussion of the one right way to evaluate nonprofit organizations. Every few years another bright shiny silver bullet is paraded around the fairgrounds for all to see with the promise that we will, at long last!, have the answer to the thorny issue of whether and how causes make a difference. The Wall Street Journal, much to my surprise, has a lucid column on the limitations of financial data as used by groups like Charity Navigator to separate “good” from “bad” nonprofits. The presumption of these efforts is that there is one right way to judge how much overhead is needed or how much programs should cost. Naturally, Robert Egger, the founder and president of DC Central Kitchen and a constant voice of reason and sanity, hits the problem on the head with this quote: “The low-administrative-overhead standard is an intellectual albatross around our necks.”
Into the breach where many have come before is Social Solutions and its Social Investment Ratings Tool. As described in the Chronicle, Social Solutions has gathered up a group of luminaries, including Egger, as well as Diana Aviv of Independent Sector, Brian Gallagher of United Way of America, and Paul Brest of the Hewlett Foundation among others to vet the tool. The goal is to provide information and comfort to donors that their money is being well spent. Again, according to the Chronicle article, “A majority of wealthy individuals (58 percent) said they would give more if they could determine the impact of their donations, according to a 2006 survey by the Center on Philanthropy at Indiana University.” What we need, says this group, is a common process for measuring results.
Unfortunately the link to the actual tool from the Chronicle article isn’t working, so I don’t have all of the information I’d like to make an assessment of this effort. However, it isn’t going to stop me from having an opinion and making the following observations:
1. There is a fundamental difference between nonprofits learning about their efforts first and foremost to improve them, and then to communicate with donors, as opposed to the other way around. This effort seems almost entirely driven by the need to satisfy donors, which both disempowers nonprofits and leads to disingenuous efforts.
2. The answer isn’t a new set of measures; it is a new process of transparency and learning. It is unsettling to see a national effort led by so many groups that have been heavy-handed in their approach to evaluation in the past. Learning doesn’t come from new requirements or forms; it comes from a natural desire to improve. Nonprofits can either numbly fill out forms to make the grade and satisfy donors, or they can become fully transparent (as can their foundations): make their 990s and audit reports (including, dare I say, even the management letters!) available online; post their learning questions, and how they will try to answer those questions, for discussion on a wiki; and engage their donors, clients, boards, and volunteers (anyone who wants to, really) in a journey of discovery and learning. Foundations and donors ought to support sincere efforts to learn and improve, rather than punish nonprofits for failures to meet their learning goals. Once burned for being honest, many nonprofits will just go back to answering the questions the way they think donors want them to.
3. Knowing how to learn is vitally important to nonprofit effectiveness. Nonprofit groups—staff and boards alike—and their donors would benefit enormously from intensive educational efforts that teach them how to learn about their effectiveness.
We need processes that recognize and honor the fact that there is nothing harder to do in this world than try to change people and communities. No one, not a software company or a large national organization, should presume to know how to do that. I have reviewed and evaluated literally thousands of nonprofit programs over the past fifteen years, and never once have the intended outcomes been the same, even for programs doing what appears to be the same thing to an outside observer. How an afterschool group or food bank or mentoring program or environmental advocacy effort goes about its work is wonderfully unique to that particular organization in that particular community, literally as unique as the people they are serving, and this should be celebrated, not cookie-cuttered away. We should embrace learning and stay away from checklists.
[Disclosure: I write this, hopefully with some insights, having founded and run Innovation Network, a nonprofit organization whose purpose is to help nonprofits and foundations measure their results, for over a decade. In other words, been there, done that for more years than I’d like to admit.]