Evaluating Your Pilot: Setting yourself up for success

Tags: webinar, pilot

(Stacey McDonald) #1

Last week I hosted a webinar with tips and considerations for organizations that are testing out or piloting a new program, with a focus on building evidence that can be used to get the support needed to grow or scale up a program. Here’s the recording:

There were also lots of questions, and I wasn’t able to get to all of them, so I’ll answer them in this discussion thread. For those of you who prefer to see the slides with notes (instead of listening to me!), click here: Evaluating Pilots Webinar - revised.pdf (1.4 MB)

Have any other questions or feedback? Share them here!


(Paul Bakker) #2

Hi @smcdonald, I took a quick skim through your slides. They look great.

At what level of evidence will OTF fund scaling efforts?

The feasibility of using a relevant comparison or control group varies by program and context. For example, it is hard to find an appropriate comparison group for programs that aim to prevent criminal recidivism, as youth outside the program who are involved in crime typically don’t want to be observed. Often it is the programs serving the most marginalized (e.g., people experiencing homelessness) that are the hardest to evaluate using quasi-experimental methods. If funders only fund programs that are better able to use (quasi) experimental methods, then they may be under-funding programs for more marginalized, hard-to-serve populations.

I am a fan of (quasi) experimental methods, and strive to use them if possible, but I do worry about only funding programs evaluated using such methods.

Again, OTF’s approach isn’t clear to me from the slides, but I’d love to hear more from you about it.


(Stacey McDonald) #3

Hi Paul, like so many things, the answer to your question about the level of evidence that OTF requires for scaling efforts is: it depends.

I agree completely that it isn’t always feasible to use comparison or control groups. Organizations are not required to use quasi-experimental designs; not only are they not always feasible, they are not always desirable.

OTF funds many different types of work, so let me try to explain my earlier answer of “it depends.” For Seed grants (small one-year grants primarily in support of testing new ideas), no real evidence is expected, just a coherent and sound plan to try something new.

For Grow grants, it depends on whether the program or model to be implemented was developed internally or externally:

  • Externally: Organizations that decide to replicate or adapt a program or model developed by another organization should be replicating a well-researched and well-documented approach. When this is the case, they can rely on the research already completed by others. Their evaluation plan also needn’t use a quasi-experimental design; they could instead focus their learning on the implementation and on understanding whether they are seeing results that match expectations.

  • Internally: Following the stages of evidence that I included in the deck, if an organization has developed a new program and has only preliminary results of positive change towards an outcome (supplemented perhaps by outside research or findings that support its approach), then OTF will support its efforts to expand the implementation, and will invest in an evaluation to further understand whether the program is leading to positive change.

When I say preliminary results, I do mean preliminary. An organization should show results from past evaluations that suggest positive changes related to a specific outcome (aligned with their grant application). If an organization hasn’t yet engaged in an outcome evaluation, that isn’t possible, so in these cases we also allow organizations to rely heavily on external research that corroborates their approach. I would encourage these organizations to spend some time, prior to submitting a grant application, scoping out how they could build evidence through evaluation, and to request the appropriate funds to do this as part of their grant application.

Does this clarify? Or just confuse? FYI - clarifying this very question is currently on my workplan!


(Paul Bakker) #4

Hi Stacey,

That definitely clarifies things, and it’s an appropriate approach. It still places value on the confidence that quasi-experimental methods can provide, while allowing the flexibility to fund where such methods are not feasible or desirable.

I greatly appreciate the clarification, and am glad to have it confirmed that this is OTF’s approach.


(Stacey McDonald) #5

Many great questions were asked during the webinar. I did my best to answer them at the time, but I didn’t get to all of them. I’m going to start responding to them here, and will also expand on some of the answers I gave live.

Q1. Can someone look over your evaluation plan with you on the Knowledge Centre?
A1. Yes! If you’d like to share your draft plan here, I would be happy to look it over, and others would also be able to provide feedback.

Q2. Would partnership development with groups who can help evaluate be a legitimate expense?
A2. I’m not entirely sure what “partnership development” was meant to refer to in this question. However, all funds needed to carry out an evaluation are legitimate expenses. All funds requested for the evaluation need to be explained clearly in the budget, and the corresponding activities should also be listed in the project plan. Partnership development here could mean one of three things: 1) funds needed to hire or compensate a partner that might help develop an evaluation plan, tools, or measures; 2) funds needed to bring together a group of organizations to develop a shared evaluation plan, tools, and measures; or 3) funds needed to engage stakeholders (beneficiaries, community members, etc.) as part of the evaluation. All of these would be legitimate expenses. If the person who posted the question meant something else, please follow up!

Q3. Say your pilot is just one of a few programs that your organization provides that all desire a similar outcome for your population. How would you best determine which program is causing the desired outcome?
A3. To be able to understand whether a pilot caused a change or outcome, you would need to do an evaluation using an experimental design (randomly assigned control and test groups) or a quasi-experimental design (a comparison group without random assignment). For more on these research designs, check out Sage Publications’ chapter on Research Designs. Also see the discussion above and this post acknowledging that these designs aren’t always possible, or even desirable. It’s much easier (but still not always easy) to understand whether a change occurred, and then try to understand a specific program’s contribution to that change. However, if you are running multiple programs that all have a common desired outcome, then you could develop some common measures and evaluation tools (such as a common survey, or common interview questions), and then analyze and compare the results, as in the sketch below. Do people have better outcomes with some programs than others, with a combination of programs, or with a certain level of participation? Also explore whether there are any other plausible explanations for the results you are seeing (other services, changes in the community, etc.).
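To make “analyze and compare the results” a little more concrete, here is a minimal sketch in Python. Everything in it (the program names, scores, and field names) is invented for illustration; it simply computes the average pre-to-post change on a common measure, grouped by program:

```python
# A minimal sketch (hypothetical data) of comparing a common outcome measure
# across programs. Note: this only describes whether outcomes differ by
# program; it cannot by itself establish which program caused the change.
from statistics import mean

# Each record: a participant's program, plus their scores on a shared survey
# taken before and after participation (all values are invented).
records = [
    {"program": "Mentoring",  "pre": 4.0, "post": 6.5},
    {"program": "Mentoring",  "pre": 3.5, "post": 5.0},
    {"program": "Job Skills", "pre": 4.5, "post": 5.0},
    {"program": "Job Skills", "pre": 5.0, "post": 5.5},
    {"program": "Both",       "pre": 3.0, "post": 6.0},
]

# Group the pre-to-post changes on the common measure by program.
changes_by_program = {}
for r in records:
    changes_by_program.setdefault(r["program"], []).append(r["post"] - r["pre"])

# Report the average change (and group size) for each program.
for program, changes in sorted(changes_by_program.items()):
    print(f"{program}: mean change = {mean(changes):+.2f} (n = {len(changes)})")
```

A comparison like this only surfaces patterns; before attributing a change to any one program, you would still need to rule out the other plausible explanations mentioned above (different populations served by each program, other services, changes in the community, etc.).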

Q4. Can you give any advice for getting a diverse team to work together to create a strong evaluation process? Also, who should ‘own’ the responsibility for overseeing the evaluation in an ideal world - recognizing that everyone has a stake, but someone needs to keep the ball moving and help keep a team accountable.
A4. While many evaluations are carried out by a team, there usually does need to be someone assigned to keeping the ball moving, as you say. Who on your team has the interest and ability (time and skills) to play this role? It needn’t be the person with the most evaluation experience, but rather someone with the ability to coordinate the work, bring team members together, and keep things on track.
I’m also going to suggest considering team members beyond staff. It can be very beneficial to include a variety of stakeholders.

Q5. Is there help to evaluate the data you collect? Sort it and decipher your results?
A5. Generally, there are three options: 1) a staff member or volunteer with an interest in evaluation learns these skills (through a course, a series of workshops, or free online materials); 2) you hire an evaluator or researcher to help you do this; or 3) you see if there is a community partner that can assist you (an academic institution, social planning council, health unit, etc.). Ideally, you would have funds earmarked for your evaluation within your project budget (in the future, request funds for this work as part of your grant request), but if that’s currently not possible, there are options.

Have more questions? Post them here!