In general, comparisons help people make more informed decisions. That’s why comparing nonprofit performance is such a major part of the program analysis work we do in the Social Impact Research (SIR) department of Root Cause. We believe it’s imperative to compare nonprofits’ analysis results to help them see how they are doing among their peers in the field and learn what they can do to improve.
While comparisons are valuable, figuring out how to compare nonprofit performance is fraught with challenges. You can’t compare apples to oranges: comparisons must be made between similar things to be accurate and valuable. Likewise, there is little value in comparing nonprofits that differ in size, program structure, and participant population. Different nonprofits have different models and expect different outcomes, so comparing them to one another is unfair and potentially paints an inaccurate picture of their program performance.
SIR recently completed a pilot test of a new report that helps nonprofits understand their performance relative to standard benchmarks and to their peers in the field. This report compares nonprofits on their program models, how well they implement those models, and the outcomes their participants achieve. It represents the evolution of the program analysis work that SIR has been doing for the past four years. We are putting nonprofits in a particular field, starting with youth career development, to the test and getting down to the nitty-gritty of what they do, how well they do it, and how closely their programs align with identified best practices in the field and the nonprofit sector. Through this process, nonprofits can find out and show off what they’re excelling at, understand which areas need improvement, and get a sense of how they are performing in comparison to their peers in the field.
To ensure that the comparison data is fair, we have divided the programs according to the types of young people they serve. Programs serving participants with more severe challenges tend to be structured differently and offer different services than programs serving participants with less severe challenges. Both types of programs may be considered part of the same field, but each is tailored to the needs of the participants it serves. Additionally, the more severe the challenges participants face, the harder those participants are to serve, and the more likely the program is to show lower outcomes than programs serving participants whose challenges are easier to overcome.
Two nonprofits serving different populations cannot be compared in the same group or be expected to achieve the same outcomes; while the structures of their programs may be similar, the specific services they offer need to differ to best serve their unique sets of participants. Because their participants face different levels of challenges, the programs will achieve different outcomes. This is why SIR has created comparison groups into which it categorizes all the programs it analyzes. These groups give nonprofits the opportunity to see how their programs are doing in comparison to similar programs serving similar populations.
The same is true for all direct service nonprofits, regardless of the social issue. And whether it is SIR making comparisons for analysis, performance measurement, and improvement purposes or donors trying to decide which organization they want to support, it’s vital to compare apples to apples.
As an organization that is committed to excellence and places high value on nonprofit performance measurement and comparison, SIR has made a significant dent in the comparison puzzle. But we haven’t solved it yet—there remains work to be done to ensure that we are getting this right.
If you would like to discuss or learn more about the comparison groups for youth career development, or the process we went through to create them, please feel free to contact me at email@example.com.