Last week I met with my team to look over results from some of our grants, and to explore the value of using Mission Measurement’s Impact Genome Project to help us learn what works. The Project quantifies the “genes” of nonprofit programs and academic research to discover what works: it breaks down the characteristics of each program to understand which ‘genes’ are more likely to be present in successful programs, with the hope of eventually predicting which programs are more likely to succeed.
As we were looking over the analysis they provided us for a small subset of our grants, I saw a lot of value in this approach, but I also saw much that worries me.
The whole approach seems to pull together results and context by identifying demographic characteristics of participants and setting, but it actually separates them. It assumes that the reasons for a program’s success or failure are linked to specific ‘genes’ – but what if they’re not? What if it’s all (or in large part) about the context? Who’s running the program? Do they have lived experience, and can they relate to the program participants? What if the implementation matters just as much as the design? I’m not saying that learning more about the design isn’t important – I do think their approach might help us do that. I just worry that in the quest to understand what works, we might oversimplify and forget that context matters.
This brings me to my bone to pick with their evidence hierarchy, and how it suggests that program statistics are better than qualitative research. Statistics on their own are not better! You could have pre/post-survey results, but on their own they tell you nothing about why those results happened. Only with supplementary research and inquiry do you learn why the results are what they are. No change between pre and post results doesn’t necessarily mean the program design (with that combination of ‘genes’) has no value. Just as you can have a course with the same length, place, and syllabus that is great one year and awful the next, what makes a course great often comes down to the teacher’s skill.
What I’d like to see, instead of ranking one kind of inquiry above another, is an acknowledgement that both are valuable and both are needed. If you want to read more about how context matters, I suggest you check out Data Feminism, specifically chapter 5: The Numbers Don’t Speak for Themselves.