Challenge: Evaluation 101 in 30 mins. What would you cover?

I’m working on an Evaluation 101 presentation/webinar for folks that are fairly new to evaluation. If you had only 30 mins, what would you cover?

This is what I’m thinking:

  1. what the heck is evaluation?
  2. why do it & how can it help you?
  3. introduce general stages

Would you or anyone you know find this helpful? Let me know. Is something so high-level even helpful? I’d love some input!


I would stress the idea of actionable evaluation: how can the metrics you select create useful knowledge that supports good decision-making and ultimately improves your outcomes? For example, counting how many people come to your events is one thing, but knowing what the impact of the event was is another. It all depends on deeply understanding what change you are trying to make, and how you will evaluate whether or not you are successfully contributing to that change.

I think that selecting metrics requires consideration of three key things:

  1. Usefulness: creates useful managerial information that allows for ongoing program improvement.
  2. Transparency: easy to communicate and understand.
  3. Cost: level of effort required to maintain over time.

I’d love to hear more about others’ thinking on this!


@mpickering thank you so much for sharing. I agree 100% about actionable, useful evaluations. I hadn’t considered touching on costs, but that’s a great point. Thanks for the suggestion!


Definitely helpful for the community! I’ve found that some folks only give it minimal attention.

I would try to communicate that evaluation is important because it can document and illustrate program benefits (and challenges) and be useful for building on program improvements in the years to come. Also, evaluation is a powerful tool for presenting verifiable program impact to potential funders and other stakeholders. Lastly, I would want to share that the data collected must be analyzed, interpreted in an easy-to-understand way, and shared widely so that others can benefit from the learning.


I’d be interested in the different types of evaluation and in how money is spent. I’d love to see the presentation when it’s finished.


@Bryan_Cook thanks for weighing in! I’d be happy to share the presentation when it’s done.

I completely agree about touching on the different types of evaluations and their different objectives. However, I actually didn’t plan to spend time on the budget. I thought that if you were new to the topic of evaluation, it might be better to start with an intro about what it is and why to do it. I’m happy to reconsider. Can you tell me a little more about why you feel that’s a priority?

Also, my colleague Sarah previously developed a webinar on that topic (it’s recorded on our YouTube channel here), so if questions come up, I planned on referring people to that material.


Thanks, I checked it out, and this was exactly the type of thing I was looking for. Do you know if internal expenses could include current employee salaries? That’s one of the few things I’m confused about.


Is there a copy of the PowerPoint presentation available anywhere?

@Bryan_Cook The presentation is not online yet, but we’re working on it. Check out OTF’s website resources section in a week or two.

I’m out of the office today, so I’ll have to get back to you about your other question tomorrow.

Hi @Bryan_Cook. Thank you so much for asking questions, and please continue to do so, but in this case I’m not sure about the answer. Someone from OTF’s support centre would be better able to answer questions about the budget. Also, if you wanted to speak to a program manager, someone from the support centre can book a one-on-one meeting for you.

Please reach out to our support centre by email at or by phone at 1.800.263.2887 or 416.963.4927

I love the idea of measuring the impact on people, as opposed to just measuring the number of people involved (as mentioned above by Mary Pickering). So what I’d like to learn more about is: how do you measure impact? My organization uses pre- and post-event surveys to measure knowledge gained; are there other, more effective methods? What do funders prefer to see?


Late to the party, but I would definitely include something on the good and bad of data gathering/capture. Also, maybe introduce different tools on a spectrum of intensity.

Good luck!


@tinashe I’d love to hear your thoughts on this. When you say ‘good and bad data gathering’ do you mean the challenges, or advice on doing it well (or things to avoid to not do it badly)?

I’d love to see a 101 presentation, especially one coming from OTF, to focus on how to have conversations with funders about adapting, evolving, and completely changing reporting outputs and metrics as a project evolves. It might seem like this is a complicated topic to introduce in a 101, but I think that creating the expectation that evaluation plans are likely to change and evolve after grants have been issued is incredibly important to ensuring useful evaluation practice over the length of a project. I find that one of the main barriers to making evaluation useful on the ground is the absence of funders from the evaluation user group and/or the exertion of influence over evaluation direction if they lack the human resources to be a full participant in the evaluation process.

Hi @Adam_Fearnall. This is such an interesting topic. I didn’t include it in the first presentation of the 101 material I’m developing, but it actually came up as a question during the webinar. I feel like this is a much bigger topic, but it’s obviously a concern for many.

I can only speak to OTF’s approach. We ask most grantees to collect one or two metrics (some are also required to use a standardized survey), because we need to understand the impact of our work on a larger scale. We also need to demonstrate the value of our work to our funder. Those aren’t (by and large) negotiable. However, grantees can ask for funds to engage in their own evaluation as part of their budget, and the purpose of that evaluation is entirely at their discretion. If tactics change, and the evaluation therefore needs to be updated, that’s entirely within the grantee’s power to do (with the caveat that once the grant is made, we can’t change the budget, so additional evaluation funds from OTF aren’t possible). The one thing that wouldn’t be OK to change is the outcome of the work (aligned with OTF’s grant result); this should remain constant.

Generally, I would add that as a funder, we want our grantees to succeed, since you’re the ones doing the work that’s so important for so many communities. We understand that things change as the work evolves, so please talk to your program manager.

I hope this was helpful. It’s definitely making me reconsider whether the topic should be added to the 101 presentation, so thanks for weighing in!

This is really interesting, Stacey. I don’t often hear a funder talk about its own funder and the way that the relationship between those two entities impacts the evaluation that grant recipients have to do. That’s an interesting layer because it allows me to empathize with the position that OTF is in (because it’s similar to the one that I see organizations in, in relation to OTF). I really like the sentiment that you express in your answer about trying to be flexible with the evaluation designs for each project.

That said, I have to admit that I see restrictions on allowing grantees to shift budgets, reallocate funds, etc. as a result of what they may learn through a formative, utilization-focused, developmental, or principles-focused evaluation (amongst other approaches) as a significant barrier to those on the ground. It can be hard for projects to ask really challenging, impact-driven questions when they know that their answers may lead them to see a real need to reallocate resources without having the support of a funder to do so.

More than anything, I think it’s interesting to consider whether a funder is an evaluation user or a stakeholder, and, based on that answer, whether it might be reasonable to ask a funder to sit at the table for evaluation design at the project level. Anyway, this is a big conversation for sure, and one that I know many in the sector are interested in having. It’s a big, scary one, though, because it involves a lot of recognition of power imbalances between participants in the conversation. Maybe this forum can be a place to start having it, because I think that doing so could really unlock another level of openness, honesty, and impact in the sector.


Hi @Adam_Fearnall, I agree that this is a really important conversation to have. I would love it if this could be a space for that. I don’t want it to get buried in this thread; would you like to start a new discussion topic with it?

Hi @smcdonald - thanks for the insight on this topic. I’m off doing some travelling just now and don’t have consistent enough internet access to start off a new discussion topic. I’ll leave it to you to seed the conversation when the time is right. Thanks!


My number one piece of advice would be “don’t wait till the end”: build evaluation, data collection, etc. in from the beginning.


I would include some content on how to involve and treat evaluation participants, especially program beneficiaries. For example, what are the various ethical competencies that evaluators require to 1) ensure no harm is done to evaluation participants, 2) ensure participants are fairly represented, and 3) ensure that evaluation findings are used for participants’ direct benefit?