Funding evaluation: what's going on here?

evaluation
funding

(Stacey McDonald) #1

Yesterday I had the pleasure of participating in a panel at ONN’s NonprofitDriven conference that discussed the findings from the State of Evaluation in Ontario report released this week. I was asked to share one finding that stood out to me. While I wasn’t very surprised that financial resources were identified as the top barrier to evaluation in the report, what shocked me was how few organizations received any dedicated funds for evaluation (28%).

And yet I had heard from a colleague that, on average, OTF applicants ask for only about 4% of their application budget for evaluation, despite being able to ask for more (up to 10% of the budget can go to evaluation). I decided to take a deeper look at the numbers. It turns out that for the past 3 years, about 30% of Seed and Grow applicants have not included funds for an evaluation in their budget. An additional 30% ask for under 3% of their budget (probably less than they need).
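To make those shares concrete, here is a minimal sketch (with invented budget figures, not real OTF application data) of how applicants could be binned by the evaluation share of their requested budget:

```python
# Hypothetical applicant budgets: (total request, evaluation line item).
# These figures are invented for illustration only, not real OTF data.
applications = [
    (50_000, 0),        # no evaluation funds requested
    (80_000, 1_600),    # 2% of budget
    (120_000, 9_600),   # 8% of budget
    (60_000, 0),        # no evaluation funds requested
]

# Count applicants with no evaluation line, and those asking for under 3%.
no_eval = sum(1 for total, ev in applications if ev == 0)
under_3pct = sum(1 for total, ev in applications if 0 < ev / total < 0.03)

print(f"{no_eval / len(applications):.0%} requested no evaluation funds")
print(f"{under_3pct / len(applications):.0%} requested under 3% of budget")
```

With these made-up numbers, the script reports 50% with no evaluation funds and 25% under the 3% mark; the real OTF figures in the post are roughly 30% and 30%.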

So I asked the audience: help me understand what is going on here. Organizations don’t have enough money for evaluation, but they’re not asking us for (enough) money either? So I’m putting the question back out to you, Knowledge Centre users: what is going on here? Help OTF understand.

For the record, OTF wants to support organizations to engage in evaluation. We believe it can be an important tool to help organizations learn, understand the value of their work, and share a fuller story of their work.


(Phil Nowotny) #2

Hi Stacey,

I read the report with great interest too, and am one of the lucky 14% who work for an organisation with at least one evaluation FTE!

Speaking as a former grant writer: when you submit an application, you want to demonstrate value for money, so you often estimate at the conservative end to win the grant. Evaluation isn’t cheap, and its costs inflate your proposal. In addition, organisations with less experience will struggle to get a solid estimate, as they don’t know the method, length, intensity, etc., and you can’t easily get quotes with the turnaround you need.

My humble suggestion is to make an evaluation allocation of 3–10% mandatory, to keep everyone on a par.

Happy Monday!
Phil


(Stacey McDonald) #3

Thank you so much @pnowotny for sharing your insights. I hope you (or anyone really!) can share a few more thoughts on this matter.

On the first point, I hope that sharing OTF’s willingness to support evaluation more openly will encourage organizations to ask for the funds they need to engage in a quality evaluation.

On the second point, would providing a tool or resources to help organizations better create a budget for their evaluation be helpful? It is difficult to develop a good budget without a clear plan, and it’s likely that organizations are not developing a detailed plan at the time of writing a grant, so I see the merit in your suggestion of allocating a minimum amount. What do others think about this idea?


(Kris Erickson) #4

Anecdotally (i.e., based on my own perceptions working with different partners in both community arts and workforce development), there is a good deal of misunderstanding around what “evaluation” ought to entail. In addition to Phil’s observation about an organization’s incomplete knowledge about what tasks are entailed in an evaluation, I think there may often be a misperception around the purpose of evaluations—for instance, that they are about performance management and demonstrating quantifiable organizational efficiency rather than about process and outcomes, and iteratively building an evidence base for continuing, shifting focus, or scaling up.

I may be wrong, but I think that the misperception that evaluations are used primarily to judge, rank, and/or ultimately value an organization and its employees is prevalent; less common is the recognition of their place in growth, maturity, and engagement.

As to the second point of your recent post, Stacey: I think resources, yes, but ones so good that they won’t get lost in the shuffle. A webinar probably won’t do, at least not alone; a kind of hands-on boot camp on evaluation for EDs, managers, board members, or any key organizational stakeholders might work. Cost-prohibitive, perhaps, but possibly a great educational outreach strategy.


(Stacey McDonald) #5

Thank you so much for sharing your thoughts on this @kris. We have been considering supporting more intensive training. If you have more thoughts about this that you’d like to share, please do!


(Kris Erickson) #6

I have lots of thoughts on this, in no small part because I am trying to develop my professional practice in this area and, as a junior evaluator and researcher, finding barriers to paid work despite significant interest from a number of parties in a range of institutional contexts!

I think the ONN’s recent “State of Evaluation” report [PDF] summarizes many of the key issues related to organizational concerns and/or misperceptions quite well:

  • on-going, periodic conversations between funders and funded organizations (“fundees”—not a word I approve of) are more effective than intensive training blasts (arguably at least);
  • it’s imperative but difficult to “free up” staff time to support evaluation efforts;
  • it is widely thought that the work of evaluation may strain relations between and amongst organizations and their stakeholder/constituent/clients;
  • lack of capacity impacts effective use of external (vs. internal) evaluators, despite the strong evidence amongst organizations that utilizing external evaluators was a good idea (and they would do it again) …

In short, however, I think the solution is a pedagogical rather than informational one; that is, I think changing organizational practice (i.e., capacity building) through facilitated workshops will be more effective than simply knowledge transfer through an exclusively communications or publications strategy.

I’m keen to hear more about the planned intensive training; I don’t mean to sound only like a dissenting voice—I really would like to be constructive.


(Stacey McDonald) #7

Be as critical as you feel is necessary @kris! I appreciate your thoughts on this, particularly your proposed solution (facilitated workshops on organizational practice). Tell me more!

On your first point from the report, I see a lot of value in on-going conversations about evaluation between the funder and the funded organization, in that they would create multiple opportunities to discuss different challenges and opportunities along the way. What I struggle with is this: do those conversations need to happen with someone who is pretty knowledgeable about evaluation, or not? Is this about advice, learning, knowledge exchange, or just ensuring it’s a top-of-mind consideration?

The staff time issue is very valid, and really a bigger issue about operational costs and budgeting, so I’m going to acknowledge it but not seek to tackle it right now.

On the third point, I was surprised by how few organizations in the survey identified the strain in relationships as a barrier to evaluation. Are you saying that more people assume that this is the case, and then shy away from evaluation?

The training plans are still in their infancy and, due to budgetary constraints at the moment, need to be rethought. I was thinking that the training would be more focused on skills (or knowledge transfer, as you state above). Again, could you expand on what you mean by organizational practice? Were you thinking more about incorporating evaluative thinking, or something else?


(Andrew Taylor) #8

I think Phil is correct. Although OTF allows applicants to include money for evaluation in their budgets, nonprofits are trying to offer good “value for dollar” in their proposals, and so they may still feel that allocating money for evaluation may make them less competitive. It may be hard for nonprofit leaders to justify earmarking resources for evaluation, when their board members are acutely aware that other aspects of their organizational infrastructure are also underfunded.

Perhaps we need to prompt people to think more broadly about how “evaluation” resources can be used - in ways that help to strengthen the organization as a whole.


(Stacey McDonald) #9

Thanks @AndrewTaylor for sharing your thoughts. I want to take the opportunity to share something here that I mentioned to you last week, and might interest others.

When I was looking over OTF’s data around funding evaluation, I saw that while about 30% of OTF Grow applicants didn’t ask for funds for evaluation, the proportion among organizations that received grants was significantly lower (if I remember correctly, 10%). That means applicants that asked for funds for an evaluation were more likely to get a grant than those that didn’t. Now, I’m not saying the lack of evaluation funds is what caused certain grant applications to be declined, but what I am saying is that asking for evaluation funds is not a barrier to getting a grant.
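For anyone who wants to check that logic, a small sketch (with hypothetical counts chosen only to match the rough 30% and 10% figures above, not OTF’s actual numbers) shows why the two proportions imply a higher grant rate for applicants who budgeted for evaluation:

```python
# Invented counts, chosen only to match the rough 30% / 10% proportions above.
applicants = 1_000
applicants_no_eval = 300       # ~30% of applicants omitted evaluation funds
grantees = 200
grantees_no_eval = 20          # ~10% of grantees had omitted them

# Grant rate within each group of applicants.
rate_no_eval = grantees_no_eval / applicants_no_eval                        # 20/300, about 7%
rate_with_eval = (grantees - grantees_no_eval) / (applicants - applicants_no_eval)  # 180/700, about 26%

print(f"grant rate without evaluation funds: {rate_no_eval:.0%}")
print(f"grant rate with evaluation funds: {rate_with_eval:.0%}")
```

Whatever the true totals are, as long as the no-evaluation share among grantees (10%) is lower than among all applicants (30%), the grant rate for the with-evaluation group comes out higher.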

That being said, I absolutely agree with you that organizations should consider more broadly how evaluation can be used to strengthen and advance their work and their organization.


(Kris Erickson) #10

@smcdonald - great, thanks!

In response to your questions:

“do those conversations need to happen with someone that is pretty knowledgeable about evaluation or not? Is this about advice, learning, knowledge exchange, or just ensuring it’s a top of mind consideration?”

I have read this thread as being about shifting practice, not simply conveying knowledge, in which case this is a matter of—to borrow the lingo of corporate learning and development (L&D)—designing learning experiences that account for a multitude of concrete impediments to changing behaviour, both personal/professional and organizational. So while, yes, someone knowledgeable about evaluation needs to be involved, in my opinion the conversations should also include those knowledgeable about the very impediments to uptake and implementation themselves: namely, of course, organizational leaders at a variety of levels and durations of experience.

A text like Julie Dirksen’s Design for How People Learn will offer insight into the considerations that will (again, in my opinion) need to be made: how to teach for knowledge and retention (principles, strategies, and tactics of evaluation), how to improve performance (application of knowledge in evaluative activities), and how to improve motivation (how to support cultures, both internal and external to an organization, in embracing evaluative thinking in their specific practice). However, I think a more concrete session or series of themed sessions designed around a project- or problem-based learning (PBL) approach could be highly productive, and generative for subsequent instruction/outreach, since the PBL approach puts learners themselves in the hot seat to identify and resolve a problem scenario—in this case, one to do with, say, stakeholder resistance to program evaluation. A facilitator skilled in PBL is needed to help the group stay focused, and one knowledgeable about evaluation can also provide the requisite content knowledge to aid the learners.

In terms of straining relations with constituents/clients, I think a more nuanced consideration is needed than the report offers; to me, the spread of responses to “Collecting measurement and evaluation data sometimes interferes with our relationships with the people we serve” (on p. 11) is telling, suggesting that the real answer is sector-specific: “it depends.”

In terms of the training question, I mean that (again, in my opinion) practice needs to shift across an organization: not just organizational leaders, but Boards and even constituents/clients (wherever possible), need to enter into evaluative thinking. If the culture is to shift, I think multiple stakeholders—those in varying positions of organizational power, with varying degrees of experience at an organization and across organizations—need to be considered in formulating evaluative frameworks. Their roles may differ, certainly and necessarily, but I think involvement needs to be as inclusive as possible.

Finally, while I agree with your assessment that skills training is needed, I would just reiterate that skills cannot be developed through knowledge transfer alone: there needs to be a shift in performance, underwritten by knowledge and understanding. That’s the difference between awareness and competency: the latter requires a demonstration of skill, while the former does not.


(Stacey McDonald) #11

Thank you so much @kris for sharing your insights. There’s so much here for us to think about. I’m going to check out Julie Dirksen’s book right away.

I truly appreciate that you’ve taken the time to share your thoughts.


(Paul Bakker) #12

@smcdonald Thanks so much for sharing that finding. I would think that presenting an organized plan and being able to provide confidence in the effectiveness of what is proposed is what leads to successful proposals. Evaluation should help an organization be effective and provide grant reviewers with confidence.

I think the budget line that evaluation most often goes under is management, and historically, nonprofits were encouraged by rating organizations to keep overhead, admin, and management costs low. Most of those rating organizations have now realized that impact is what matters, and starving nonprofits of needed organizational capacity doesn’t help them be effective.

If OTF can show more stats demonstrating that you fund effectiveness rather than the lowest overhead costs, it will go a long way for the sector, including encouraging more budgets for evaluation.

I hope you will consider publishing something on your findings in an outlet like Nonprofit Quarterly.


(Stacey McDonald) #13

Thank you @PaulBakker. I hadn’t considered publishing something more substantial, but will consider it. I’ll do a bit more investigating, and perhaps start with sharing additional thoughts here.