Mitigating Bias in Data Collection

tip-tuesdays, bias

(Jennifer Looi) #1

One tool for mitigating bias in the evaluation world is randomization: chance alone decides who participates or is evaluated. Applied to a large enough group, it helps minimize the impact of differences that could otherwise creep in through selection bias in a mixed pool of participants.

A good example of this is the way customer satisfaction surveys are distributed to people who interact with OTF. Among those who have interacted with us, we can imagine a mix of declined and approved grant applicants, people who like and dislike the way we operate, and so on. By allowing for anonymity, and by randomly selecting participants from outreach events, Catalyst registrations, and similar lists, we can hear a better diversity of voices than, say, having each OTF staff person talk to five people they’ve interacted with. Participants are less likely to feel pressured to respond a certain way, and we are less likely to hear exclusively from grantees, who are probably quite happy about getting funding.
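For anyone curious what that random selection could look like in practice, here is a minimal sketch in Python. The file name, column name, and sample size are assumptions for illustration only, not a description of how OTF actually draws its survey lists.

```python
# Minimal sketch: draw a simple random sample of survey invitees.
# Assumptions (hypothetical): contacts live in "catalyst_contacts.csv"
# with an "email" column, and 50 invitations will be sent.
import csv
import random

SAMPLE_SIZE = 50

with open("catalyst_contacts.csv", newline="") as f:
    contacts = [row["email"] for row in csv.DictReader(f)]

# random.sample gives every contact an equal chance of selection,
# so the invitee list is not shaped by who staff happen to know.
random.seed(2024)  # fixed seed only so the draw can be reproduced and audited
invitees = random.sample(contacts, min(SAMPLE_SIZE, len(contacts)))

for email in invitees:
    print(email)
```

The same idea works with any list of potential participants; the key point is that the selection rule is chance, not anyone's judgment about whom to ask.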

What other methods could be helpful in mitigating bias before and during data collection?