In advance of next month’s Conference on the Politics of Evidence, recent weeks have found me speaking at meetings in Abuja, Geneva and Oxford. Meeting participants were country programme staff, M&E specialists and head office programme/policy people in international NGOs and United Nations agencies. What was the mood?
First of all, relief among many participants at being in a collective space where it’s OK to talk about how they feel about what is happening to them – and at finding that others were experiencing similar frustration at the impact of the results and evidence agenda on their jobs. ‘All I do now is write reports,’ said one. ‘I make up numbers,’ said another, bitterly. There is anger at the absurdity of ‘targetitis’ and the proliferation of tools and protocols. People mentioned being forced to develop Theories of Change that ignored the ‘politics of how’, and the sheer stupidity of a large global programme that has 300 indicators for measuring change. The long-standing anxiety was aired about how alternative pathways of change are ignored by the logical framework’s single linear cause-effect proposition.
At the end of February we stopped taking stories from the crowdsourcing survey, having received a total of 151 responses and over 100 individual stories and snippets of experiences. We’ll be working them up into a paper to share at next month’s Conference, but here are a few observations as a trailer.
A key take-away message for me from the exchange on evidence with Chris Whitty and Stefan Dercon on the From Poverty to Power blog was the challenge of communicating complex ideas simply – particularly the difficulty of clearly explaining the importance of different ways of looking at the world. In her recent blog Cathy Shutt pointed out some great comments on this, notably the tweet from @ScepSec: ‘Not sure what troubles me most. The twitterati’s inability 2 see the message or Roche & Eyben’s inability to express it’. Inadequate communication can result in those with different perspectives ‘talking past each other’, as Duncan Green suggested in his summary of the debate.
Understanding different perspectives is important not only to get to grips with ‘evidence’ but also to assess how successfully ‘wicked’ problems are being tackled. For example, understanding the issue of violence against women requires inquiring into the perspectives and world-views of those involved – including victims and perpetrators, as well as citizens, legislators, governments and donors. And doing this well requires an understanding of epistemology, i.e. what kinds of knowledge are considered valid, as well as methodology, i.e. how this knowledge can be generated.
The crowd-sourcing survey for experiences of the results agenda finishes on 28 February. We have already had over a hundred experiences shared.
Evidence on the effects of the results agenda matters – but you need evidence to discuss it! What is happening that helps your work, and where is it hindering your ability to contribute to transformational development? The nuance of grounded experience is critical to move beyond unhelpful stereotyping and straw-man debates (see Duncan Green’s summary of the wonkwars on evidence).
We need more experiences to better understand what is happening across all the different corners of international development practice. The survey is designed to identify where the results agenda is showing its best side and where it hurts. What works, what doesn’t, and under what conditions? The findings will frame discussions on Day 2 of the BPF conference. The collection so far includes cautionary tales as well as stories of good practice. Please add more!
All experiences will be treated confidentially and will contribute to the analysis presented at the conference. Remember – 28 February is the last day to share how the results agenda is affecting your work.
Who is it for? Development practitioners who work towards changing the power relations and structures that create and reproduce inequality, injustice and the non-fulfilment of human rights, and who are interested in the current debates about the positive and negative effects of the ‘evidence’ and ‘results’ discourses on locally-owned and transformational development. We define development practitioners as people working in the international development sector – as staff members or consultants in a bilateral or multilateral agency or an international NGO; as employees of a government department or a non-governmental organisation in a low-income or middle-income country that receives financial aid; or as people based in a university or think tank partially funded by the sector, a private-sector consulting company or a philanthropic foundation.
I have been reflecting on the recent ‘wonkwar’ (debate amongst experts) about ‘the political implications of evidence-based approaches’. The debate generated considerable interest and activity in the twittersphere: some brilliant comments and quite a large poll that yielded perhaps unsurprising results.
Excellent comments by several people, which I cite here, reminded me that the Big Push Forward’s agenda extends far beyond the realm of evaluation practice and is broadly concerned with the production and use of knowledge by the development community. Several remarks also emphasised that the Big Push Forward needs to articulate its position on ‘evidence’ more clearly.
‘Not sure what troubles me most. The twitterati’s inability 2 see the message or Roche & Eyben’s inability to express it’
It became apparent that our critique of the ‘discourse of evidence’ was being read as anti-evidence! Nothing could be further from the ‘truth’. We are critiquing the use of the term ‘evidence’ as a particular discourse that influences what is thinkable and do-able in the politics of development. Inequities in power and voice mean that only some approaches to knowledge are judged acceptable, and when these are called ‘objective’ it makes them unquestionable.
During a recent visit to Bolivia I had a conversation with an old friend and colleague, Rosario León, about her thoughts on the Results Agenda – from the perspective of her experience as a scholar-activist, consultant and part of the NGO community that has been heavily financed by international aid. Her perspective seems to be missing so far from the crowdsourcing we reported on last week. We started our conversation by reference to her ‘Story from Aidland’, which details her battles with results artefacts and their effects on her colleagues and their relationships with their partners. It explains how she and her colleagues tried to meet the requirement to produce measurable results while simultaneously using the aid machine’s funds to support the social changes that were occurring as the marginalized people of Bolivia started to claim their citizenship rights and assert their cultural identities. Rosario and her colleagues increasingly found themselves caught between the everyday realities of working in local communities and the incongruous bureaucracy of annual operating plans, along with the dictates of remote donor organizations. The effect of this ‘double life’ was that some felt like actors adopting a language and a set of tools – technical activity reports, expenditure reports and products – quite distinct from the work they were actually doing. It was part of a mutually constituted chain of hypocrisy that Rosario outlined to me.
The BPF position on the results agenda has been that the glass is half empty rather than half full. It’s hard to quibble with the desire to increase effectiveness, or with the idea that evidence can be a useful tool to that end. However, from a managerial perspective we were worried that the results agenda could – depending on how it was applied – devolve into a mechanistic insistence on meaningless numbers and downward pressure on operations budgets, rather than generating opportunities for informed decisions, learning and adaptation. From a power-sensitive standpoint, we also thought that the agenda’s insistence on certain forms of knowledge and evidence would reinforce existing power relations, exclude, divide politically and disempower. Those of us committed to transformative development feel there are good reasons to worry that the glass is likely not only half empty, but cracked and dripping.
Meanwhile, we observed a mismatch between the negative feedback we were hearing informally about the results agenda and the discussions taking place in public. We therefore decided to put out a survey to canvass a wider range of views. The survey (link here) combined four multiple-choice questions – eliciting reactions to the impact of the results agenda and its positive or negative outcomes for achieving development missions – with space for richer snippets and stories.
Although the survey will remain open until late February, we thought it useful to discuss the feedback we’ve got so far, from the multiple-choice questions only.
We hope the story that follows will be the first of several case studies from Big Push Forward supporters about how results and evidence artefacts are used in the politics of evidence. It shows how the methodological utility of an artefact has to be discussed in relation to the process and context of its uptake. In this case the politico-managerial environment is one in which hidden and invisible power determines which knowledge counts, and hierarchical ways of working (in both donor and recipient organisations) block communication and consultation.
Anonymity in this case has been protected by changing the policy question that was to be reviewed and by providing pseudonyms for the donor agency and the research organisation that feature in the account. The author calls himself Joseph K because, as he emailed us, he felt like the character of that name in Kafka’s stories, which are full of rumour and hearsay and in which there are no villains. Everyone stumbles around as in a fog, in what a commentator on Kafka calls ‘murky fields of imposition’. The good news is that this case study is less gloomy than Kafka. Resistance is not always futile, and there is solidarity that makes space for pushback. Before the case, we summarize the debate about systematic reviews and draw some conclusions from Joseph K’s story.
A 2012 brief from ODI explains that Systematic Reviews (SRs) help answer the question of ‘what works’ in international development policy and practice, a question ‘ever more important against a backdrop of accountability and austerity.’ SRs ‘are increasingly considered a key tool for evidence-informed policy making’. USAID, DFID, CIDA, the World Bank and AusAid are among the development agencies promoting their use. Most SRs have drawn on experimental and quasi-experimental studies of the effects of particular – very specific – development interventions. In the Journal of Development Effectiveness, Hugh Waddington and others stress that systematic reviews may be less appropriate for broader, less specific issues. Mallet and others (also authors of the ODI brief) emphasize that SRs miss context and process, which matter more in international development than in the medical research from which SRs originate. Birte Snilstveit and others make the case for applying the systematic review principles of transparency, comprehensiveness and systematic rigour to broader policy questions, provided the review is more open, interpretative and exploratory. This kind of balanced academic discussion about systematic reviews is not what happened in the following narrative, which tells of a lost year of confusion, anger, argument and resistance that consumed much energy and time for those involved.
Thus the conclusions this case offers for the Big Push Forward are: read more…
We are inviting case studies as evidence about the politics of evidence, for sharing at next April’s conference. Please send us your experiences. If you prefer that your name not be published, we will help you edit the story to protect the anonymity of people and organisations. Go to the end of this post for details.
The discourses of ‘results’ and ‘evidence’ are becoming increasingly prominent in the planning, appraisal, implementation, monitoring and evaluation of development projects and programmes. ‘Evidence’ is about proof of ‘what works’ (or doesn’t work) in tackling a problem. Evidence leads to action – intervention or treatment of the problem – that delivers ‘results’, which are reported upon and possibly also evaluated. The two discourses share a particular understanding of causality, efficiency and accountability, originating in – and still more prevalent in – countries with an Anglo-Saxon empiricist tradition. However, through the dominance of English-language-based global institutions such as the World Bank, the terminology and related practices are spreading widely within the international development sector and into countries receiving external aid.
Results and evidence discourses shape our working practices through what we are calling ‘artefacts’. Artefacts are the protocols and procedures created to implement the ideas embedded in ‘results’ and ‘evidence’ discourses. A well-known artefact is logical framework analysis. Artefacts become powerful when used as incentives (carrots) and mandatory requirements (sticks). Just as the boundary between the results and evidence discourses is fuzzy and shifting, so the use of these artefacts can change with time and context. By and large, however, results artefacts are used today to plan, implement, monitor and report for accountability purposes, e.g.: read more…