
Glass apparently a little more than half full: initial reflections on survey

2013 January 11

The BPF position on the results agenda has been that the glass is half empty, rather than half full. It’s hard to quibble with the desire to increase effectiveness, or with the idea that evidence can be a useful tool to that end. However, from a managerial perspective we were worried that the results agenda could – depending on how it was applied – devolve into a mechanistic insistence on meaningless numbers and downward pressure on operations budgets, rather than generating opportunities for informed decisions, learning and adaptation. From a power-sensitive standpoint, we also thought that the agenda’s insistence on certain forms of knowledge and evidence would reinforce existing power relations, exclude, divide politically and disempower. Those of us committed to transformative development feel there are good reasons to worry that the glass is likely not only to be half empty, but cracked and dripping.

Meanwhile we observed a mismatch between the negative feedback we were hearing informally about the results agenda and the discussions taking place in public. We therefore decided to put out a survey to canvass a wider range of views. The survey (link here) combined four multiple-choice questions, eliciting reactions to the impact of the results agenda and its positive or negative outcomes for achieving development missions, with space for richer snippets and stories.

Although the survey will remain open until late February, we thought it useful to discuss the feedback we’ve got so far, from the multiple-choice questions only.

  • More positive responses than negative: When asked about the agenda’s impact on their ability to fulfil their mission, significantly more respondents were positive (45.0%) than negative (26.3%) or mixed (25.0%). On the agenda’s impact on their ability to learn, the division was even more pronounced: 51.9% were positive, 29.6% mixed and only 8.6% negative.
  • A clear impact: Only 9.9% said there was no change to their ability to learn, and only 3.8% said there was no impact on their ability to complete their mission. When asked to rate the extent of the impact on their daily work (options ranging from none, through mild, some, considerable and extensive, to fundamental change), responses rose to a peak at ‘considerable change’ (42.3%) and then fell away.

Among ourselves we have been discussing what these numbers mean:

Positive BPF convenor: Well, my grumpy friend, it would appear we have been unduly sceptical, miserably suggesting the glass is half empty. The numbers are more positive than negative. Let us instead throw our qualified support behind the agenda.

Grumpy BPF convenor: Humbug. First, it’s not all positive. There’s actually a majority who believe there are at least some negative consequences for their ability to deliver the mission. Second, all it shows is the views of some of the readership of the website – just under half of the respondents were M&E people, and we’ve known for a while that they like the agenda because it gives them a lever for getting M&E taken seriously within their organisations. Senior management made up another 13% or so, and of course they’re positive [editorial note: on their ability to achieve the mission, they were 81.8% positive and 18.2% mixed, with no outright detractors; n=11] since it equips them with information and therefore power. But what we can’t be sure of is whether we’ve caught the view from the trenches. Perhaps they are too busy filling in reports and forms, and too crushed by the system, to have the time, space or energy to do our survey?

Positive: That may be so, but from the breakdowns of respondents, those programme staff who did respond [n=10] were mostly positive too. In fact, if you look at the breakdown, it’s mostly researchers – gloomy Cassandras like us – who have responded overwhelmingly negatively, and that may be because research is notoriously difficult to get good impact assessments for. It’s scarcely typical. Well, no more!

Grumpy: As for the researchers being miserable (which I admit we are), it’s because we’re more aware of power and more used to analysing from a critical standpoint. We believe the results agenda reinforces existing imbalances and constrains transformative development. Of course we’re miserable. Second, do we really think everyone here is interpreting the questions in the same way? We don’t have much context unless we look at the narratives too.

Positive: That may be so. The survey numbers remain low (between 72 and 81 respondents completed each question), so we can’t conclude too much from this. We’ll see what else comes in, and look at the stories too.

We’d be interested to get reflections or interpretations from our readers on the viewpoints and interim results, and please do continue to circulate this survey (which closes at the end of February)!

6 Responses
  1. Benedict Wauters permalink
    February 12, 2013

    I would agree with some of the observations above about the lack of clarity over what is meant by the “results agenda”. If it just refers to “set targets, use carrots and sticks to make sure they are achieved”, which seems to be the core of many approaches taken (usually pushed by consultants who adore its simplicity), then I think you may be getting some more negative responses about what this means in practice. If it refers to a much broader concern about achieving something real, then there is considerable scope for overlap with what you call transformative development, and the positive echoes may relate to that. In any case, I have been trying to develop an approach that provides a higher-level synthesis. It should be ready by the conference and I’d be delighted to get feedback on it.

    • Rosalind Eyben permalink
      February 12, 2013

      Hi Benedict
      Thanks for this! However, I am not sure we should lay the blame on consultants for the “set targets, use carrots and sticks to make sure they are achieved” approach. There are stronger political drivers at work, which I have analysed in the background paper I have prepared for the conference and will be sharing next month with those registered for the conference. On the other hand, I look forward very much to learning about your approach and hope you can make it to the conference in April.

  2. Brendan Whitty permalink*
    January 28, 2013

    Thanks for the comments and observations – we’ll be doing the full analysis and coding of the experience snippets people have provided for the Politics of Evidence conference in April. We’re up at around 100 stories, so it may take a little while, hence only a partial taster for the time being. Quick responses to some of the questions:

    Gillian: “Was there any additional information re: the nature of this considerable change? Be fascinated to know.” – That’s also going to come out of the full analysis, drawing on the stories people have been giving us.

    Gillian: “When I completed the survey, I must admit to feeling somewhat frustrated about the use of the term ‘results agenda’ as if it is a) a homogenous object, clearly understood; and b) either good or bad… The problem is that the results agenda is tied in practice to a very limited view of ‘results’, based on a particular paradigm of what counts as evidence.”

    – Thanks for this. In designing the survey, we went back and forth on using the term ‘results agenda’, and in the end stuck with it, since it does signify a loose bag of reforms with some commonality. Any more precise definition would be debatable to some extent, and would stop us from seeing what development professionals see as reforms belonging to this idea. Still, we take your point.

    Donna: “Another factor may be respondents’ organisational affiliation…” We do have information on the nature of the organisation, and the full analysis will pull this out as a factor.

    Kate: “I wonder if there’s any link between people’s views and the length of time they’ve worked in development?” Thanks for this – it’s definitely a possibility. We’ll see if it comes out of the stories, but we haven’t got systematic information on this.

  3. Gillian Fletcher permalink
    January 22, 2013

    To quote: ‘When asked to rate the extent of impact on their daily work (rating from: none, mild, some, considerable, extensive and fundamental change) the responses rose to a peak at ‘considerable change’ (42.3%)’.

    Was there any additional information re: the nature of this considerable change? Be fascinated to know.

    When I completed the survey, I must admit to feeling somewhat frustrated about the use of the term ‘results agenda’ as if it is a) a homogenous object, clearly understood; and b) either good or bad.

    In principle, I am supportive of greater effectiveness and much of the rhetoric of the Accra Agenda for Action. I still believe that the majority of people involved in development are involved because they want to see results, and in my experience most people feel frustrated about their/development’s ability to achieve results, and would like to see greater effectiveness. The problem is that the results agenda is tied in practice to a very limited view of ‘results’, based on a particular paradigm of what counts as evidence.

    Maybe you can share a rough coding of some of the case studies, to give an idea of what people saw as positive outcomes of the results agenda?

    Best,
    Gillian

  4. Donna Loveridge permalink
    January 22, 2013

    Thanks BPF for the early analysis of the survey results.

    A number of factors may influence attitudes: the summary highlights respondents’ positions, and Kate mentions length of experience in international development.

    Another factor may be respondents’ organisational affiliation – do they work for a donor, a development consultancy, or a recipient/partner organisation (public sector, NGO, private)? I cannot remember what the specific questions on organisational affiliation were in the survey, but perhaps it would be interesting to see if there is any variation according to this factor.

  5. Kate permalink
    January 15, 2013

    I wonder if there’s any link between people’s views and the length of time they’ve worked in development? Someone who has only worked in development during the 2000s might find it harder to know how the results agenda affects their work or to imagine alternative ways of operating. That might make them more positive.

    Alternatively, if the results agenda has brought benefits, those who have worked in development for longer might be the more positive as they would be more aware of problems under previous approaches.

    Of course this might not have any effect at all – just thought I’d throw it into the thinking!
