Data and Dialogue

2011 June 6

At the end of this week, I’ll be in Berlin, speaking and above all listening at the Berlin Civil Society Centre. Their question for me is ‘how can we best link impact measurement with organizational learning?’. I’m a bit flummoxed at this stage about what else to say that hasn’t been said many times before. We’ve discussed notions of learning and definitions of impact. We know the tensions between attribution and contribution, between expert assessment and participatory monitoring, between standardized policy recommendations and adaptive policy-making. ‘Theory of change’ has never been more mentioned. And of course the method mania and tug-of-war persist. In this process, there is some interesting stereotyping and labeling happening, recent examples of which can be found in this address by Carlsson, Sweden’s Minister for International Development Cooperation. Common reactions are to assume that resistance to certain forms of measurement is the same as assessment allergy, and demonstrates a lack of interest in ‘the truth’.

But increasingly the elephant in the room, the killer assumption, is that as long as the data is rigorous, we’ll know and act accordingly. Baloney. Evaluations inevitably include contentious findings and are often ignored (a point stressed by Carlsson, by the way, in her speech). Savedoff comments on five recurring objections to expensive impact evaluations, including the non-linearity between evidence and policy decisions. But he then concludes rather obtusely: ‘Certainly more can be done to encourage the use of evidence in policymaking but this is a case where more and better supply can, I believe, have an indirect and ultimately decisive influence over the course of time.’ How? Trickle down, trickle across, float up? Learning and uptake need to be planned for; they won’t ‘just happen’.

Last week, I was listening to staff of one unit within a bilateral aid agency discuss their approach to living with messiness in fragile state interventions. They recognized that the past is not always a good indicator of what is useful for the future, and that limited or no success is at times part of the deal – totally at ease with complexity thinking. And, guess what? They were not using this as an excuse to ‘not measure’. They need to report on which of the many small grants they give seem to be working, and they need to know for their own decision-making – learning their way forward in murkiness and with little time – not only to show ‘value for money’ within their own agency.

How refreshing that they were not hypnotized by method but focused on questions and process. They had multiple processes where people met and shared and debated and decided. Okay, perhaps there were too many processes they said, but listening to them, I was intrigued that they were not hung up on data as god – from whatever methodological tradition (and there were several in the room). They took sense-making seriously, as well as communication of findings. (Interestingly though, they did not recognize this as evaluative practice, demanding of themselves other ‘real’ monitoring and evaluation systems.  But that is another discussion.)

And this is where I think we can make considerable advances, reconnecting data with dialogue. Evidence is too divorced from sense-making – rigorous thinking about what information is useful needs equally rigorous thinking about how to digest it and pronounce on what the data says, and its degree of certainty.

Yet aspects of uncertainty rarely figure in evaluations. Stirling and Scott offer an interesting perspective that might help shape the contours of debate for the Big Push Forward. They urge moving from a focus on risk to accepting incomplete knowledge: “Instead of seeking definitive global judgments about the risks of particular choices, it is wiser to consider the assumptions behind such advice – since these are central in determining the conditions under which the advice is relevant. Above all, there is a constant need for humility about ‘science-based decisions’.”

Stripping away the bells and whistles of data collection methods and tools, I’m keen to explore what makes for rigorous sense-making and what role assumption analysis plays in it. Perhaps more of the evidence being requested, designed, collected and synthesized will then get used.

6 Responses
  1. June 15, 2011

    Thanks Rick, Tina, Juliette and Bob for the thoughtful comments. I fully recognise the perverseness that Tina refers to – the difficulty of being frank in this day and age of ‘transparency’ and accountability. I can share a few tales of this myself, as we all can.

    I hear the word of caution about terms like ‘assumptions’ and ‘sensemaking’. However, we’re not in the luxurious position at the moment of being able to say that too much deliberate sensemaking is happening (or that we have loads of paid time for it), or that too much work on assumptions is being done. Both dimensions of understanding what works and why get far too little attention. And the fact that both are difficult is all the more reason to flesh out these ideas.

    I’m finding the term ‘sensemaking’ useful to start a discussion about ‘so what’ after data collection, and about it being more than just (statistical) analysis. It leads to more focused discussions so far than the generic ‘reflection’ or ‘thinking’ that is sighed away as navel gazing. In this day and age of rigorous data collection, we need an equally strong understanding and practice of rigour in sensemaking. At the very least, discussing this makes people perk up and start asking ‘hmmm, well, how DO we make sense of this diversity of findings?’. And who is involved? And what/whose values count in the process?

    ‘Assumptions’ is a bit more tricky as this term has been used for a long time, of course, in the logical program theory mode, with assumptions often being things like ‘War won’t break out’, ‘Harvests won’t fail’, ‘Partner NGOs will fulfil their part of the deal’. But we need to get much better at this – it’s the missing middle bit of the evaluation discussion. Surfacing what you don’t even know you don’t know is, needless to say, tough.

    The third item that is linked to two of the thematic clusters is ‘communication’, which Juliette’s comment addresses. How do we do that in ways that are affordable, nuanced, and useful?

    I’d love to see more discussion on differentiated ways of sensemaking and ditto for assumptions. Thanks again!

  2. June 15, 2011

    I want to pick up on the excellent comments of Rick and Irene. Just like Rick’s comment that “sensemaking” risks being downgraded into some kind of synonym for thinking (the concept of “reflection” suffered the same fate), I think we have to be careful with the idea of “assumptions”. I’ve been interested in the notion of exposing and exploring assumptions for many years. I’ve come to the conclusion that it is one of the hardest things to do well: it needs considerable technical dexterity and a very clear notion of what it is you are trying to do. Otherwise you can easily end up with useless generalisations about the state of the world or huge laundry lists of factors. There are various methods around that can help you escape the worst. My personal favorites are RAND’s Assumption-Based Planning, and Mason & Mitroff’s Strategic Assumption Surfacing and Testing (SAST), but the trick is to find one that works for you and your situation. Happy to discuss this further.

  3. Juliette Majot permalink
    June 10, 2011

    First, thank you for this blog and the very thoughtful comments above. I am wrestling with the knock on effect in advocacy organizations of the trend of funders toward measurable results, and what that means for advocacy evaluation — and advocacy itself. This comment then does take off on a related, but slightly different track.

    It is no secret that log frames, SMART tables, and even many efforts at theories of change are often developed at the advocacy organizational level primarily to establish a framework for funder-mandated evaluations that are, at the level of the organization, essentially symbolic in nature. Though widely recognized, the problem remains, undermining not only evaluation usefulness, but also the validity and value of many internal strategic processes weighed down by it all. This plays real havoc with how meaningful evaluation can be at the advocacy organization level, what data is truly legitimate, what findings actually matter, and how to manage the gap between what a funder finds useful in terms of data and analysis (or at least thinks they find useful) and what is useful to the advocacy organizations actually carrying out the work being evaluated.

    I bother to take on the gnarly processes of evaluation because I honestly believe evaluation to be capable of working for the work. This is an ethical matter from my perspective. I’ll come back to that.

    My approach has been to attempt to use the entire evaluation process, from initial design through presentation, as a window for rich dialogue and inquiry at the level of the advocacy organization about what actually makes sense to them and why. This is a discussion that builds toward more developmental and internalized evaluative thinking. It comes as no surprise that in my experience, advocacy clients find and create high levels of process use. Why? Because this is where data and dialogue are, indeed, linked up. And as for those detailed, data-driven reports complete with executive summaries, findings, conclusions and countless annexes? Pretty much one or two uses. Submit to funder. Submit to board. (Perhaps I state this too strongly. There can be, and often are, useful and insightful discussions at the drafting stage, but with limited numbers of staff and more than a little frustration on their part that they will essentially remain alone in their newly found evaluation appreciation.)

    The dilemma of reporting – the process of conveying evaluation findings and results in what is conceived as some sort of culmination of the process – often leaves me facing an ethical dilemma. Reporting some form of conclusion serves as a driver for the process use that yields such positive results. Written reports are the deliverables mandated by funders. But is this use of limited resources an ethical one in service of social change? Are processes designed to culminate in a brick-of-a-report really processes that work for the work? Or is it possible to build better and more effective evaluative approaches within the advocacy community? I believe it is, and that it is our imperative to do so.

    The end of my slightly off-track comment. Why did I not just send this to an advocacy-evaluation-specific blog or site? Because I find the broader conversation relevant and useful, and will continue to stay tuned in. Many thanks for the opportunity to chime in.

    Juliette Majot

  4. Tina Wallace permalink
    June 7, 2011

    I enjoyed this contribution and want to pick up on a couple of aspects.
    The first really worries me – not only do many agencies (donor and INGO) not respond well to critical issues arising from evaluations, they actually refuse to accept them in a number of ways – including intimidating some evaluators into changing their text, changing it themselves, or not using the full report and writing their own highly ‘edited’ version.
    Increasingly, the experience of ‘feeding back’ interesting, complex and challenging findings to senior staff is becoming a problematic moment. People can find their integrity impugned, their methodology dismissed, their findings queried… the contentious nature of this moment in evaluations is becoming much more intense; partly, I suspect, because INGOs and others have to ‘promise’ so much from their work, and ‘proving success’ is what gets rewarded – financially and in terms of inclusion in important forums, etc. – so there is little incentive for embracing what can be defined as ‘bad news’.
    There are many issues wrapped up in here, not least who ‘owns’ the data, and who can use it and how; and what the role now is of external people collecting the evidence that people say is so important and yet is so often difficult to accept or address. The kinds of debates that good data and analysis should promote between the external team and the agency are becoming rare, as evaluations themselves get increasingly tight in their TOR and the time available for learning from the reports is so limited.
    It is a conundrum that in an era when evidence and learning are called for all the time, it has never been more difficult to collect, share and openly discuss this evidence… both consultants and agencies could learn so much more if time for reflection and discussion was built into the feedback, but this is very rare.
    Irene has put her finger on many critical issues… the one I am really struggling with is what role learning, evidence and reflection have in the current aid paradigms and the dominant narratives – related to results, proving success to taxpayers, single-issue approaches and short-term horizons. One issue for me is how to engage in a debate about why these issues are dominant, and what happened to so much past learning (and indeed evidence) about what actually enables positive change on the ground for poor women and men.

  5. June 7, 2011

    Re “living with messiness in fragile state interventions” and “moving from a focus on risk to accepting incomplete knowledge”, your readers might be interested in a very interesting essay in Prospect, June 2011, titled “What should we expect of our spies?”, subtitled “What counts as success?”. I have summarised some of its contents and have included a link to the full text of the original essay here. Interestingly, the author concludes with a similar emphasis to yours on the need for dialogue and interaction as a way of resolving/dealing with some uncertainties.

    I do think it would be better to talk about dialogic methods, rather than seek to separate dialogic processes from monitoring and evaluation methods, quite a few of which involve some form of interaction between people. And I am wary about over-use of the word sensemaking, useful as it is. There is a risk that it becomes something like the word “emergent”, eventually meaning something different to each person. Are there types of sensemaking processes that could be differentiated?

    Re the killer assumption, I think we are faced with two problems/risks, not one. One is that someone might have what we think is quite solid and accurate evidence but not be able to communicate it and use it effectively to influence policies and programmes. The other is that someone might be able to communicate and effectively influence policies and programmes, but with evidence that we don’t think is well founded. While the former may represent a waste of resources and lost opportunities, I am inclined to think that the latter presents more dangers, because changes are actually effected.

    [I know this leads us into the territory of “Well, what is solid evidence?”, and I agree there is plenty of dispute around much evidence that is claimed to be solid. But most of those involved in such debates generally still hold on to the idea that some evidence is more solid than other evidence.]

Trackbacks and Pingbacks

  1. Contradictions and Gaps in the UK Results Agenda « The Big Push Forward
