@smcdonald - great, thanks!
In response to your questions:
"Do those conversations need to happen with someone that is pretty knowledgeable about evaluation or not? Is this about advice, learning, knowledge exchange, or just ensuring it's a top of mind consideration?"
I have read this thread as being about shifting practice, not simply conveying knowledge. In that case, to borrow the lingo of corporate learning and development (L&D), this is a matter of designing learning experiences that account for a multitude of concrete impediments to changing behaviour, both personal/professional and organizational. So while, yes, someone knowledgeable about evaluation needs to be involved, in my opinion the conversations should also include those knowledgeable about the impediments to uptake and implementation themselves: namely, organizational leaders at a variety of levels and with varying lengths of experience.
A text like Julie Dirksen's Design for How People Learn offers insight into the considerations that (again, in my opinion) will need to be made: how to teach for knowledge and retention (principles, strategies, and tactics of evaluation); how to improve performance (application of knowledge in evaluative activities); and how to improve motivation (how to support cultures, both internal and external to an organization, in embracing evaluative thinking in their specific practice). However, I think a more concrete session, or series of themed sessions, designed around a project- or problem-based learning (PBL) approach could be highly productive, and generative for subsequent instruction and outreach, since PBL puts learners themselves in the hot seat to identify and resolve a problem scenario: in this case, say, stakeholder resistance to program evaluation. A facilitator skilled in PBL is needed to keep the group focused, and one knowledgeable about evaluation can also supply the requisite content knowledge to aid the learners.
In terms of straining relations with constituents/clients, I think a more nuanced consideration is needed than the report offers. To me, the spread of responses to "Collecting measurement and evaluation data sometimes interferes with our relationships with the people we serve" (p. 11) is telling, suggesting that the real answer is sector-specific: "it depends."
In terms of the training question, I mean that (again, in my opinion) practice needs to shift across an organization: not just organizational leaders but Boards and even constituents/clients (wherever possible) need to enter into evaluative thinking. If the culture is to shift, I think multiple stakeholders need to be considered in formulating evaluative frameworks: those in varying positions of organizational power, with varying degrees of experience both at an organization and across organizations. Their roles may differ, certainly and necessarily, but I think involvement needs to be as inclusive as possible.
Finally, while I agree with your assessment that skills training is needed, I would just reiterate that skills cannot be developed through knowledge transfer alone: there needs to be a shift in performance, underwritten by knowledge and understanding. That is the difference between awareness and competency: the latter requires a demonstration of skill while the former does not.