During the convenors’ reflection on the recent Politics of Evidence conference we wondered whether more nuanced power analysis might help us break out of unhelpful linear aid chain mentalities related to results and evidence. The case studies presented at the conference suggest that if we are to make the results and evidence agenda more supportive of transformational social change we need to move away from the idea that the politics of evidence is all about visible power. Images of monolithic all-powerful donors placing unreasonable evidence and results demands on well-intentioned, powerless recipients are not very helpful. Many of the experiences shared suggest we need to get more adept at identifying how hidden and invisible power influence the use of results and evidence artefacts in different contexts by individuals from different cultural backgrounds and who possess varying capacity and confidence. Such an understanding might enable us to develop more politically savvy strategies and tactics to harness useful aspects of the results and evidence agenda whilst mitigating the risks of it being used in ways that could contradict transformational development aims.
Conference convenors were delighted that on the 23rd and 24th of April we were able to bring together so many thoughtful and engaged development professionals. They came from across the globe – people working on the ground, in head offices, in consultancies and in research institutes.
The Politics of Evidence conference provided an opportunity to share and strategise for people working on transformative development who are trying to reconcile their understanding of messy, unpredictable and risky pathways of societal transformation with bureaucracy-driven protocols. They have struggled to make sense of the shifting sands of the results agenda – seeing the wisdom in some aspects while actively questioning its less useful, sometimes damaging, manifestations and consequences.
We designed the conference to make the most of participants’ experiences and ideas, and everyone had the chance to share these in the break-out groups, including documented case studies from about a third of the participants. As Lawrence Haddad commented in his blog yesterday on the conference, power pervaded these stories. We hope that engaging in such an interactive process will have given participants the courage and confidence to adopt, and develop further, the potential strategies and tactics (developed in the break-out groups and shared in the final plenary session) that make possible programming and evaluative practice fit for transformative development.
Over the next month or so – while the conference report is being finalised – the convenors will be blogging about some of the key issues and challenges that the conference threw into relief. Then, we plan to start work on a book that will explore these issues further, including contributions from some of the conference participants.
The Big Push Forward convenors aimed to throw a stone into a pond to make ripples. We hope these ripples will continue to expand outwards. Meanwhile, by September the current group of convenors will be stepping down in the hope that others come along to throw in more stones – either as the BPF or in some other form. Contact us if you are interested!
Day One of the Big Push Forward’s Conference on the Politics of Evidence reflected, I suppose, much of what might have been expected from it. There was a great deal of pushing back and forth about ideas and philosophies, rich discussions, a soupçon of frustration, some positivity and a lot of interest in carrying some of the ideas into the second day to talk about strategies.
Since a blog cannot hope to convey the discussion, I’ll restrict myself to some threads which I felt stood out:
- Measurement processes, tools and artefacts can all be positive or negative: it’s not about the artefact itself – although discourses can grow around artefacts, and some discourses can push in one direction or another – but about how the artefact is interpreted. It’s about the detail of implementation, and the people and relationships involved in introducing and communicating them.
- Agency, tai-chi and ju-jitsu: people reflecting on their own positions are not just automatons within a relentless machine. There is agency, and there are possibilities to shape the directions of organisations and the ways organisations – or the people who work within them – understand the world through measurement and evaluation processes. It’s just that sometimes a little tai-chi – or possibly ju-jitsu – is needed to turn people around.
- Disjunctures in scale: some of the measurement techniques used to evaluate interventions and to convince at the level of general policies do not necessarily work at the level of individual projects. RCTs, for example, are purpose-designed for scale-up, but that may not be the case for many project evaluations at a local scale.
- Ownership: fundamentally, one of the biggest concerns articulated was about ownership of the evaluations, and who they are for. It’s about programme staff whose projects become strangers to them, or those we seek to support who felt themselves robbed of voice in the face of evaluations, experimental design, and the power of evidence.
- There is no Big Bad Wolf proposing mindless tools to do people down: there are repeated, deep, systemic issues in play, coming from a fragmented and highly political environment, dealing with difficult problems. Everyone in the room had their own philosophies and their own ways of pursuing development aims within that system.
These are of course just some personal reflections and take-aways. Tomorrow, the sessions are focusing on whether we can come up with strategies for opening the space for fair assessments for development in this complex system.
The Big Push Forward convenors welcome our hundred and ten participants and student volunteers. For all those unable to attend: this evening Brendan Whitty will be blogging about the day’s highlights, and don’t forget that tomorrow we are live-streaming our final session.
Two papers have been prepared for the conference and these are now online. Rosalind Eyben’s Uncovering the Politics of Evidence and Results disentangles the historical threads and origins of results-based management and evidence-based policy/programming discourses. She discovers a strong ‘family resemblance’. Both assume that evidence pertains largely to verifiable and quantifiable facts, and that other types of knowledge have less or little value; both have a particular understanding of causality, efficiency and accountability. The paper looks at how and why these discourses have entered and influenced the development sector, and who is promoting them in which contexts. What has been the effect on the sector’s priorities and practices, and particularly its capacity to support transformative development?
Arguing the importance of being critically aware of how power sustains and reinforces the results-and-evidence discourses, Rosalind examines how these discourses generate artefacts (tools and protocols) such as log frames and theories of change that shape our working practices. When hierarchical ways of working block communications and dialogue, the artefacts trigger perverse consequences but their power is neither uniform nor constant. Analysing the politics of accountability and the sector’s internal dynamics, Rosalind suggests there is room for manoeuvre to expand and enable more transformative approaches to results and evidence within the sector.
Brendan Whitty’s paper, Experiences of the Results Agenda, analyses data from an online survey that invited visitors to the Big Push Forward website to give their perceptions of the impact of the results agenda on their working lives. Brendan analyses the very different experiences and interpretations of the respondents, as revealed through 153 responses to the quantitative survey and 109 qualitative stories. The study discusses the day-to-day practice of small-e evidence – results and targets in the management of specific projects – rather than large-E evidence used to establish broader development policies. The stories are about the nuts and bolts of development processes and artefacts – the theories of change, results frameworks, reporting requirements and value-for-money rubrics. It is about what ‘e’ is being collected, how it is used, and to what effect.
Respondents disagreed about the effects of these artefacts, and their contradictory perceptions often sit in tension: learning with accountability; capturing complexity in evaluation with harmonisation and reductionism; coordinating partners with constraining their freedom to adapt. How these tensions are resolved, and how the perceptions play out, seems to depend on how the artefact is communicated, managed and tailored to its context. Fit appears to be important: the fit of the artefact to the organisation’s existing systems and capacity, and also its fit to the specifics of the intervention (e.g. its complexity, the number of partners). Finally, perceptions of an artefact seem to be affected both by staff’s own circumstances and by their relationships with others. The survey data suggest that those in M&E and management roles, who benefit from better data and more resources for their priorities, tended to be more positive than those in project implementation and mid-level roles.
During the conference we will be exploring these ideas and testing these interpretations. Come back tomorrow for the deliberations of Day 1.
This week our Politics of Evidence conference gets underway – and we celebrate our second birthday. We are delighted there has been so much interest and sorry that we have inadequate space for all those keen to attend. For those unable to attend, we are live streaming the last session of the conference (15.45–17.00 UK time on 24 April). We hope to post the conference report on the website at the end of May and have longer-term plans for a book.
The conference coincides with the second anniversary of the birth of the Big Push Forward. We posted our first blog on the 26th April 2011. Our aims have stayed consistent – helping create the political space to ensure appropriate approaches in the design, monitoring and evaluation of projects with transformative aims. This is necessary for the international development sector to continue to seize opportunities to support transformation for greater social justice.
In my experience, the results agenda is not only emotional in the sense of controversial, but also confusing to many people. NGO staff I work with in Africa, Asia and Germany have difficulties with the concept of results, and much goes wrong. Arguably a lot of the trouble stems from a strong utilitarian influence on the results agenda that does not fit well with other cultural traditions involved in development aid.
Utilitarianism is a philosophical tradition that started in Britain in the 18th century. It deals with the question of how to act morally, and what government action is morally best. Put simply, in a utilitarian view, the more happiness human behaviour creates, the more moral it is. In the words of Bentham, “it is the greatest happiness of the greatest number that is the measure of right and wrong”. The utilitarian idea has been very influential in Britain, and more widely in the Anglo-Saxon world. The push for effectiveness builds on this tradition. “Happiness” is now made to be understood as “results”. Governments are “effective” (read: moral) if they produce lots of “results” (read: happiness). To make effectiveness measurable, results should be pre-defined. I am not sure whether the architects of the results agenda are aware of their utilitarian background, but we are all heavily influenced by our traditions, and the forerunners of the results agenda (New Public Management, micro-economics and the logical framework concept) are dominated by North American thinking built on utilitarianism. People from other traditions simply do not understand the underlying assumptions and are confused. Being German myself, I have observed that German development agencies found it rather difficult to introduce results frameworks. They experienced a lot of resistance from staff, and people were confused for a long time. They disliked the added bureaucracy that comes with the current results concepts. But I believe the underlying reason is that the Anglo-Saxon results concept does not fit into German culture.
Many Germans, particularly in the social and cultural sciences, are brought up in very different philosophical traditions than Anglo-Saxons. Two philosophies of German origin are particularly relevant to the effectiveness debate.
At the Politics of Evidence conference we will be discussing how certain approaches to accountability may undermine the sector’s potential to support transformative development. Payment by Results (PBR) is one to watch out for. But we have been here before.
As Europe and North America industrialised and proceeded to colonize the rest of the world, the positivist power of numbers appeared to tame uncertainty in an era of rapid change. In Britain, the fondness for measurable facts led to the introduction of ‘payment by results’ (PBR) into elementary schools in the middle of the 19th century. PBR (aka Cash on Delivery) is when commissioners of services (e.g. a government) pay the service providers only after a pre-determined result has been achieved and independently verified. The logic of PBR is that there is a manageable level of risk in achieving the result and that service providers must be incentivised to play a more active role. 150 years ago – like today – the buzzwords were efficiency, value for money (VfM), competition and a balanced budget. At the time, PBR was criticised for a mechanistic approach that impeded children’s educational development and sacrificed long-term benefits for short-term achievements. By the end of the 19th century PBR had been abolished, partly due to the increased bureaucracy and administration costs of verifying the results. PBR had been proven to be inefficient! Fast forward to 2013, when ‘Public bodies seem to be pursuing the use of payment by results with the vigour of a drunk in search of the next bottle of alcohol’, Jon Tizard writes.
Speakers at IDS and ITAD’s recent launch of the Centre for Development Impact provoked interesting reflections related to evaluation and the politics of evidence. Bob Picciotto, a former Director-General of the World Bank’s Independent Evaluation Group, gave an inspiring keynote speech calling for a multi-disciplinary, ‘beyond aid’ evaluation agenda driven by the desire to tackle inequality and contribute to social justice. I found myself wondering whether this could be considered a call for a more reflexive, political evaluation agenda with a new politics of evidence, and whether the development community is equipped to respond.
The Politics of Evidence conference will be exploring how people are engaging with problematic practices and protocols, and what alternatives they have found to create spaces for approaches more aligned with transformational development, and which serve their learning purposes. I believe we can learn a lot about this from the experience of ‘front-line’ practitioners who are often subtly playing the game to change the rules.
It would seem that most people believe that ‘mixed methods’, amongst other things, are essential in order to make sensible judgements about the effectiveness of development interventions. This would include agencies such as the World Bank, as well as evaluation specialists. DFID has recently commissioned an important study entitled ‘Broadening the Range of Designs and Methods for Impact Evaluations’, which sought to ‘establish and promote a credible and robust expanded set of designs and methods that are suitable to assess the impact of complex development programs’.
Now whilst there is still a great deal of debate about whether there is really a commitment to mixed methods in practice, there are also a number of challenges with implementing ‘mixed methods’ approaches. This post focuses on these issues.
One problem we face in the evidence debates is the use of single examples to assert a generalization or uphold certain positions. This led us to organize the crowdsourcing of experiences as input to the Politics of Evidence conference. With 150 stories shared online and 70+ to be debated at the conference, a more grounded discussion becomes possible, generating nuanced insights about ‘the politics of evidence’ and nudging us beyond simplistic yes/no positions.
I’ve been taking a fresh look at my own work with a ‘politics’ lens and see it in small and larger forms in many nooks and crannies. Below are some work situations from the past five months in which I am directly involved. They illustrate how I see ‘the politics of evidence’ playing out in designs and decisions around what Brendan has described as the (small and big) E of evidence – the results- and target-oriented issues, and those that relate to evidence of what works.