Glass apparently a little more than half full: initial reflections on the survey
The BPF position on the results agenda has been that the glass is half empty rather than half full. It’s hard to quibble with the desire to increase effectiveness, or with the idea that evidence can be a useful tool to that end. However, from a managerial perspective we were worried that the results agenda could – depending on how it was applied – devolve into mechanistic insistence on meaningless numbers and downward pressure on operations budgets, rather than generating opportunities for informed decisions, learning and adaptation. From a power-sensitive standpoint, we also thought that the agenda’s insistence on certain forms of knowledge and evidence would reinforce existing power relations, exclude, divide politically and disempower. Those of us committed to transformative development feel there are good reasons to worry that the glass is likely not only to be half empty, but cracked and dripping.
Meanwhile we observed a mismatch between the negative feedback we were hearing informally on the results agenda and the discussions taking place in public. We therefore decided to put out a survey to canvass a wider range of views on the results agenda. The survey (link here) combined four multiple-choice questions – eliciting reactions to the impact of the results agenda and its positive or negative outcomes for achieving development missions – with space for richer snippets and stories.
Although the survey will remain open until late February, we thought it useful to discuss the feedback we’ve received so far, from the multiple-choice questions only.
- More positive responses than negative: On the agenda’s impact on their ability to fulfil their mission, significantly more respondents were positive (45.0%) than negative (26.3%) or mixed (25.0%). On its impact on their ability to learn, the division was even more pronounced: 51.9% positive, 29.6% mixed and only 8.6% negative.
- A clear impact: Only 9.9% said there was no change to their ability to learn, and only 3.8% said there was no impact on their ability to complete the mission. When asked to rate the extent of impact on their daily work (on a scale of: none, mild, some, considerable, extensive and fundamental change), responses rose to a peak at ‘considerable change’ (42.3%) and then fell away.
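As a sanity check on the figures above, it is worth noting that with roughly 72–81 completed responses per question, the quoted percentages correspond to whole-number counts. The sketch below assumes hypothetical raw counts for the "ability to learn" question (the actual counts were not published) that happen to reproduce the quoted percentages at n=81:

```python
# Hypothetical raw counts for the "ability to learn" question.
# These are an assumption chosen to match the quoted percentages
# (51.9% positive, 29.6% mixed, 9.9% no change, 8.6% negative);
# the survey itself only reported percentages.
counts = {"positive": 42, "mixed": 24, "no change": 8, "negative": 7}

n = sum(counts.values())  # total completed responses for this question
shares = {k: round(100 * v / n, 1) for k, v in counts.items()}

print(n)       # 81 - within the reported 72-81 range
print(shares)  # matches the percentages quoted in the bullet above
```

This is only an illustration of how small the underlying numbers are: a shift of two or three respondents would move any of these percentages by several points, which is worth bearing in mind in the discussion that follows.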
Among ourselves we have been discussing what these numbers mean:
Positive BPF convenor: Well, my grumpy friend, it would appear we have been unduly sceptical, miserably suggesting the glass is half empty. The numbers are more positive than negative. Let us instead throw our qualified support behind the agenda.
Grumpy BPF convenor: Humbug. First, it’s not all positive. There’s actually a majority who believe there are at least some negative consequences for their ability to deliver the mission. Second, all it shows is the views of some of the readership of the website – just under half of the respondents were M&E people, and we’ve known for a while that they like the agenda because it gives them a lever for taking M&E seriously within their organisations. Senior management made up another 13% or so, and of course they’re positive [editorial note: relating to their ability to achieve the mission, they were 81.8% positive, 18.2% mixed, with no outright detractors; n=11] since it equips them with information and therefore power. But what we can’t be sure of is whether we’ve caught the view from the trenches. Perhaps they are too busy filling in reports and forms, and so crushed by the system that they don’t have the time, space or energy to do our survey?
Positive: That may be so, but from the breakdowns of respondents, the programme staff who did respond [n=10] were mostly positive too. In fact, if you look at the breakdown, it’s mostly researchers who have responded overwhelmingly negatively – gloomy Cassandras like us – and that may be because research is notoriously difficult to do good impact assessments for. It’s scarcely typical.
Grumpy: As for the researchers being miserable (which I admit we are), it’s because we’re more aware of power and more used to analysing from a critical standpoint. We believe the results agenda reinforces existing imbalances and constrains transformative development. Of course we’re miserable. Second, do we really think everyone is interpreting the questions the same way? We don’t have much context unless we look at the narratives too.
Positive: That may be so. The survey numbers remain low (between 72 and 81 people completed each question), so we can’t conclude too much from this. We’ll see what else comes in, and look at the stories too.
We’d be interested to get reflections or interpretations from our readers on the viewpoints and interim results, and please do continue to circulate this survey (which closes at the end of February)!