The latest report from the UK’s Independent Commission for Aid Impact (ICAI) looks at DFID’s approach to delivering aid impact. The report draws on ICAI’s past reports and other research to explore how well DFID ‘goes about ensuring meaningful and sustainable impact across its portfolio. By ‘impact’, we mean positive, long-term, transformative change for poor people, who are the intended beneficiaries of UK aid.’ Some of the findings of the report include:
- “[W]e find that the results agenda has helped to bring greater discipline in the measurement of results and greater accountability for the delivery of UK aid – both key objectives. These achievements have, however, involved some important trade-offs. Some of DFID’s tools and processes have had the unintended effect of focussing attention on quantity of results over quality – that is, on short-term, measurable achievements, rather than long-term, sustainable impact.”
- “DFID is, indeed, very influential on development policy… [i]n recent years, it has been a forceful advocate for the results agenda. We hear from many partners, however, that DFID’s single-minded promotion of better results management has come at the expense of thought leadership on long-term development impact.”
- The DFID Results Framework “conveys the scale of activity rather than the quality of interventions and their impact on people’s lives. We are also concerned that it sends an inaccurate message to both implementing and country partners that DFID cares more about maximising outputs and beneficiary numbers, than achieving transformative change”.
- “Invest in long-term impact rather than short-term results: With average programme length of just three years, transformational impact will often be possible only over several programme cycles. This should be recognised explicitly in programme design. For complex objectives, this means gradually putting in place the building blocks for lasting impact, including the right policies, priorities, institutions and capacities in steps appropriate to the context. While DFID does this well in many instances, its results management processes do not necessarily encourage it.”
- “A flexible approach to delivery: Good, impactful programming is flexible in pursuit of its goals. Programme designs are ‘best guesses’ as to how to solve complex problems in dynamic environments. They are most effective when they take a problem-solving approach, learning as they go. … [W]hile annual reviews should provide such a mechanism, they tend to concentrate more on holding the implementer to account for efficient delivery, than on learning how best to achieve long-term impact.”
In a nutshell: ‘The key message of this report is that it is time to take the results agenda to the next level – to ensure that it focuses not just on the cost-efficient delivery of UK aid but also on achieving genuine and lasting impact for the world’s poor.’ The report goes on to suggest, amongst other things, that DFID should “ensure that the incentives of staff and implementers encourage an investigative, problem-solving approach to programme implementation and a willingness to adjust programmes as necessary in response to lessons learned or changing conditions”.
In our new book The Politics of Evidence and Results in International Development: Playing the Game to Change the Rules? which is published today, we explore many of these issues and some of the more systemic roots of how, and why, the results agenda can distort attempts to promote transformational change. In particular we look at some specific examples and strategies of how development practitioners and managers have navigated, resisted and used the ‘results agenda’ for more transformational ends. We believe these experiences provide some clues to how organisations which truly seek to promote sustainable impacts might create a more enabling environment for their staff and partners to do so.
Many of the cases covered in the book noted the key role of individuals’ personal power to question, create and act. It was individuals who adjusted, revised or ignored their particular set of planning protocols, results frameworks and reporting requirements to create more workable options. Understanding why and how they did this provides examples not only of how these processes might need to be adapted, but of what space for human agency and discretion is required if programmes are to be flexible and adaptive.
This in turn required people to be astute political actors who understood their context and organisational histories, and who knew how to leverage organisational values. Those organisations that seem to have navigated well the narrow path between the pressure to meet reporting requirements and continuing to pursue transformational goals drew upon these understandings to resist where necessary, adapt where possible and invent new ways of doing things where feasible. This required clearly identifying what is positive about the results and evidence agenda, using certain protocols or methods to advance social justice ideals, and being self-critical about what is worth doing, or not.
Another strategy, though less common, was that of helping front line staff and partners to speak up and be heard directly by those making decisions about strategies, priorities and resources. This collapsing of hierarchy seems to have created a space where the authenticity, passion and sheer common-sense of those steeped in local context can be communicated to those with the power to make decisions. These exchanges needed safe spaces in which learning about what works, and does not, could take place, and ideas for influencing developed. Such processes can then lead to collective action to respond to the agenda, although the authors are cautious about how effective that strategy has been for them.
The final strategy which emerged is about being politically opportunistic about fashionable results-related concepts that can provide opportunities to help challenge the power relations that cause inequality, rather than reinforce them. The book documents cases where the current interest in ‘Value for money’, ‘theories of change’, and ‘social accountability’, have all been ‘used’ in this way, and suggests the ‘thinking and working politically’ agenda offers further opportunities to do so.
Why Politics Matters
Our conclusions note that just as it is clear that ‘politics matters’ in the development process, so too do politics, power, interests and ideas define organisational policies, procedures and behaviours. As another recent publication on indicators suggests, monitoring and evaluation processes smuggle theories of development, human nature and governance into apparently neutral systems of measurement. It is important that this is recognised and understood.
We would argue that understanding and shifting the power dynamics that shape who decides what gets measured, how and why, is a key step towards ‘taking the results agenda to the next level’, and towards getting to grips with both the evaluation of politics and the politics of evaluation, and how in practice this might be done.
This blog was also posted on the Practice for Change website of the Institute for Human Security and Social Change
Due to be published by Practical Action in July 2015, this book is the outcome of an international conference for people who wanted their work in development to support local, national and global efforts to transform power relations for greater social justice and the realization of rights – and this within a political context of ever-increasing emphasis on certain kinds of tangible results and evidence. Conference participants were invited to share experiences and strategize about their efforts, and most shared rich, varied and frank cases, a small selection of which have been developed into chapters for the present book.
Chapter 1 Introduction by Rosalind Eyben and Irene Guijt.
This chapter introduces the concerns that have guided the Big Push Forward and its culminating conference on the politics of evidence, from which originated the case studies in the present book. The book’s principal themes that emerge from the case studies are identified, and the chapter concludes with a summary outline of its contributing chapters.
This website archives the resources, debates and 60 separate blogs of the international Big Push Forward Initiative (2011-2013) that culminated with a conference bringing together from all over the world 100 development professionals and scholar activists. The conference provided an opportunity to share and strategize for people working on transformative development, and who are trying to reconcile their understanding of messy, unpredictable and risky pathways of societal transformation with bureaucracy driven protocols. We distinguished between the big ‘E’ (evidence of what works or not) and small ‘e’ (evidence about performance) and the interaction between these.
One of the outputs from the Conference is a book to be published by Practical Action in 2015, The Politics of Evidence in International Development: Playing the Game to Change the Rules?, edited by Chris Roche, Rosalind Eyben and Irene Guijt. The book brings together and further develops papers and case studies selected from among those presented and discussed at the Conference.
Our book will explore the history of the ‘results’ agenda and its consequences for development practice. In doing so it attempts to explore the external and internal political factors and drivers of the push for evidence, results and value for money in development agencies. Drawing on a range of conceptual, theoretical and practical material, it presents an analysis of strategies that enable more transformative approaches to results and evidence within the sector.
Provisional content and titles
- & Chris Roche – Introduction
- Rosalind Eyben – Uncovering the Politics of Evidence and Results
- Brendan Whitty – Mapping the Results Landscape
- Cathy Shutt – The Politics and Practice of Value for Money
- Chris Roche – Juggling Multiple Accountability Disorder
- Janet Vähämäki – The results agenda in development cooperation
- Vicky Johnson – Valuing Children’s Knowledge: Is Anybody Listening?
- Ola Abu Al Ghaib – Aid Bureaucracy and Support for Disabled Peoples’ Organizations
- & Eberhard Gohl – Unwritten Reports: An NGO Collective’s Lessons for Reporting
- Marjan van Es & Irene Guijt – Theory of Change as next ‘Best Practice’? The Experience of Hivos
- Irene Guijt – Conclusion: Strategies, Gaps and Potential
On April 23 and 24, 2013, one hundred development professionals debated ‘the politics of evidence’, the report of which is now available (BPF PoE conference report). The conference provided an opportunity to share and strategize for people working on transformative development, and who are trying to reconcile their understanding of messy, unpredictable and risky pathways of societal transformation with bureaucracy driven protocols. We distinguished between the big ‘E’ (evidence of what works or not) and small ‘e’ (evidence about performance) and the interaction between these.
Participants discussed four questions, using their own cases of generating and using evidence:
- What is ‘the politics of evidence’ – factors, actors, artefacts? And why is it important?
- What are the effects of potentially useful approaches to evidence of and for change on transformative intentions and impacts?
- Under what conditions do these practices retain their utility rather than undermine transformational development efforts? What factors and relationships drive the less useful practices and protocols?
- How are people strategizing to make the most of what the results and evidence agendas have to offer transformational development?
Participants noted the positive effect of encouraging more critical reflection in planning and programming. Negative effects included the questionable ethics of certain demands, the unclear utility of some artefacts, wasted resources, and ‘sausage numbers’.
Participants shared strategies for reducing the perverse effects of evidence artefacts and for enhancing their use for more transformative effect. Recognising one’s own power to make a difference, through either resistance or creative compliance, was considered a critical first step. Understanding the contexts that generate the promotion and use of evidence artefacts helps influence their effective use and critical reflection. Building collaborative relationships and stronger organisational capacities to engage meaningfully with evidence and results artefacts were also areas where participants had usefully invested efforts.
More evidence is needed about the ‘politics of evidence’, in particular how it is being experienced by grassroots workers and mid-level government staff. More examples about the utility of certain artefacts are also needed, as are ways to hold organisations to account about the utility and relevance of required protocols.
We have been silent for much too long, but that doesn’t mean nothing is happening! I have spent the British summer shivering and trying to avoid catching colds in Australia. My (laboured) efforts to write a chapter for the Big Push Forward Politics of Evidence (POE) book have been enriched by participation in a number of conversations with NGO and AusAID staff. In late July I joined a well-attended and lively Politics of Evidence-type workshop in Melbourne supported by ACFID (the Australian NGO network), the Australian National University and La Trobe. (I am going to leave it to one of my fellow convenors – Irene Guijt, Chris Roche, Gillian Fletcher, Patrick Kilby or Megan Cooper – to report!)
During my visit, I have also participated in several discussions about what lessons the results and value for money agendas in the UK might have for practitioners here. It’s difficult to tell with an imminent election, and I don’t want to get bogged down in the details of conversations, but I am struck by how much donor and NGO staff (there was a huge turnout from AusAID staff at one event) seem to appreciate having a more political analysis of the value for money agenda, informed by the crowd-sourcing study and experiences shared at the UK conference. Several, who I think must have been expecting a dull methodological discussion, have enjoyed honest sharing about how confusing we (official aid agencies, NGOs and consultants) initially found the agenda in the UK, and the contingent effects it has had on different actors situated in different parts of aid organisations and relationships. It’s great to be able to share examples that get away from a simple donor–recipient dichotomy and provide a more nuanced view of the efforts people in different locations are making to enhance the positive effects of the results and evidence agenda and mitigate the risks. I have certainly learned a great deal and sharpened my thinking about the complementarities and potential tensions between the big ‘E’ drivers of VfM measurement (economist policy makers) and the small ‘e’ results drivers (management accountants), which I hope to be able to unpack in the book chapter (groan).
The Australian community is starting its VfM journey a bit later than we did in the UK, and therefore enjoys the benefit of learning from some of our ‘mistakes’. There are lots of innovative approaches being developed to help NGOs enhance and demonstrate value for money in ways that are consistent with BPF values. These include enabling citizens to define what is valued and to rate the relative efficiency of different NGOs’ projects, as a means to enhance NGO accountability to the people they work with. I won’t say more as I don’t want to steal the thunder of those involved more directly; hopefully they will be able to share their experiences here soon. My reason for posting a very short blog is not only to distract myself from the pain of trying to write something coherent, but to let others know how useful folks here are finding our sharing of UK and European experience. I tend to be a bit sceptical about the value of such exercises, but on this occasion it seems to have been particularly worthwhile. Concrete examples such as Christian Aid’s value-based value for money framework seem to have really resonated with people here.
This is going to be short as I am sure many are enjoying well-deserved holidays, but I guess the message is that there is merit in trying to keep the BPF POE conversation going. Only this morning I received an email from Gillian Fletcher asking whether it might be possible to do a POE gig in Burma where practitioners are struggling with results and evidence issues. What a great idea!
At the Politics of Evidence conference it was clear that for many practitioners the problem with inappropriately imposed results frameworks or approaches was at least as much to do with poor internal dialogue and power relations within their own organisations, as it was with external agencies insisting upon them. This is something we have explored before in terms of how the politico-managerial environment plays out in development agencies.
Whilst the conference included many examples of how staff in agencies acted successfully as brokers and intermediaries between donors and partners, there were also examples of where this was not the case. This post focuses on some of the discussion around these examples.
Four propositions about the ‘squeezed middle’
In part, some people felt that problems were linked to a poor understanding by what was called ‘the squeezed middle’ of the realities and contexts of the programmes they were managing, or ‘representing’. The ‘squeezed middle’ are those staff in middle management positions who are often the interlocutors between ‘front line staff’ or partners working on the ground, and senior managers or donors.
Secondly, it was discussed whether, compared with field staff or partners, incentives were greater for the ‘squeezed middle’ to respond to the demands of senior managers and donors. Combined with low knowledge of the contexts and programmes they were responsible for, this tended to produce a reluctance or inability to ‘push back’ on inappropriate demands.
Thirdly, it was noted that there has been a recent tendency to recruit middle managers who may lack long-term field experience of working on transformational development processes, and/or may have skills or experience from forms of project management that tend towards the contractual and linear. Thus they might be happy to insist upon what others might consider inappropriate approaches because they believed them to be the ‘right thing to do’, not because they were told to do so.
Finally there was a view that the competition between agencies and the lack of strategic collaboration between them means there is little solidarity and collective effort designed to resist inappropriate demands. More prosaically it was suggested that this was compounded by busy staff not having the time or the space – or indeed incentive – to pursue such collaboration. If these four propositions have some validity it is perhaps not surprising that compliance with certain forms of results based management was seen as having as much to do with internal organisational dynamics as it does with a simple form of power relationship between ‘donor’ and ‘recipient’.
This is also why examples of how such a situation might be disrupted are of particular interest. One powerful illustration of this was presented at the conference.
Front-line persuades Minister to change results framework
This case explored a project with sex workers, funded by a bilateral agency. In order to placate domestic interests, this agency decided to approach two national organisations to support the project – one an HIV & AIDS activist organisation and the other a religious development organisation. One of the main ‘success indicators’ was defined, in effect, as ‘stopping sex workers being sex workers’. Two years after the project started, the nine partner organisations were brought together in the country of the funding agency, to meet each other but also to meet with domestic organisations working with sex workers. Despite the organisations being very diverse, spanning nuns working in Bolivia to male sex workers in Macedonia, all agreed that the idea of attempting to stop sex workers working as sex workers did not make any sense, and that what was needed was to support them to be safer in their work.
At the end of the conference the group went to the capital city and got an audience with the Minister whose agency was funding the work. Collectively the group told the Minister that they would be happy to continue spending his government’s money as long as he would agree to different indicators of success, chosen with respect to their local understandings of change and what the groups they worked with wanted. Faced with this unanimity of diverse groups speaking directly from their experience, the Minister was persuaded to agree to their proposition.
In this case, the insistence of those working on the front line on representing their concerns collectively, and directly to the politician responsible, led to a heightened understanding of the realities and complexities of the context, followed by a revision of the results framework in ways that the organisations involved in this project felt comfortable with. This certainly got me thinking that creating more opportunities for collapsing hierarchy and letting front line staff and partners represent and speak for themselves is critically important.
During the convenors’ reflection on the recent Politics of Evidence conference we wondered whether more nuanced power analysis might help us break out of unhelpful linear aid chain mentalities related to results and evidence. The case studies presented at the conference suggest that if we are to make the results and evidence agenda more supportive of transformational social change we need to move away from the idea that the politics of evidence is all about visible power. Images of monolithic all-powerful donors placing unreasonable evidence and results demands on well-intentioned, powerless recipients are not very helpful. Many of the experiences shared suggest we need to get more adept at identifying how hidden and invisible power influence the use of results and evidence artefacts in different contexts by individuals from different cultural backgrounds and who possess varying capacity and confidence. Such an understanding might enable us to develop more politically savvy strategies and tactics to harness useful aspects of the results and evidence agenda whilst mitigating the risks of it being used in ways that could contradict transformational development aims.
Conference convenors were delighted that on the 23rd and 24th April we were able to bring together so many thoughtful and engaged development professionals. They came from across the globe, including those working on the ground, in head offices, in consultancies and research institutes.
The Politics of Evidence conference provided an opportunity to share and strategise for people working on transformative development, and who are trying to reconcile their understanding of messy, unpredictable and risky pathways of societal transformation with bureaucracy driven protocols. They have struggled to make sense of the shifting sands of the results agenda – seeing the wisdom in some aspects while actively questioning its less useful, sometimes damaging, manifestations and consequences.
We designed it to make the most of participants’ experiences and ideas and everyone had the chance to share these in the conference break out groups, including documented case studies from about a third of the participants. As Lawrence Haddad comments in his blog yesterday on the conference, power pervaded these stories. We hope that their engagement in such an interactive conference process will have given participants courage and confidence to adopt and develop further the potential strategies and tactics (developed in the break out groups and shared in the final plenary session) to make possible programming and evaluative practice fitting for transformative development.
Over the next month or so – while the conference report is being finalised – the convenors will be blogging about some of the key issues and challenges that the conference threw into relief. Then, we plan to start work on a book that will explore these issues further, including contributions from some of the conference participants.
The Big Push Forward convenors aimed to throw a stone into a pond to make ripples. We hope these ripples will continue to expand outwards. Meanwhile, by September the current group of convenors will be stepping down in the hope that others come along to throw in more stones – either as the BPF or in some other form. Contact us if you are interested!
Day One of the Big Push Forward’s Conference on the Politics of Evidence reflected, I suppose, much of what might have been expected of it. There was a great deal of pushing back and forth about ideas and philosophies, rich discussions, a soupçon of frustration, some positivity and a lot of interest in carrying some of the ideas into the second day to talk about strategies.
Since a blog cannot hope to convey the whole discussion, I’ll restrict myself to some threads which I felt stood out:
- Measurement processes, tools, artefacts, can all be positive or negative: it’s not about the artefact – although discourses can grow around them, and some discourses can push in one direction or another – but about the interpretation of the artefact. It’s about the detail of their implementation, and the people and the relationships involved in bringing them in and communicating them.
- Agency, tai-chi and ju-jitsu: people reflecting on their own positions are not just automatons within a relentless machine. There is agency, and there are possibilities to shape the directions of organisations and the way organisations – or the people who work within them – understand the world through measurement and evaluation processes. It’s just that sometimes a little tai-chi – or possibly ju-jitsu – is needed to turn people around.
- Disjunctures in scale: some of the measurement techniques used to evaluate interventions and to convince at the level of general policies do not necessarily work at the level of individual projects. RCTs, for example, are purpose-designed for scale-up, but that may not be the case for many of the project evaluations at a local scale.
- Ownership: fundamentally, one of the biggest concerns articulated was about ownership of the evaluations, and who they are for. It’s about programme staff whose projects become strangers to them, or those we are seeking to support who felt themselves robbed of voice in the face of evaluations, experimental design, and the power of evidence.
- There is no Big Bad Wolf proposing mindless tools to do people down: there are repeated, deep, systemic issues in play, coming from a fragmented and highly political environment, dealing with difficult problems. Everyone in the room had their own philosophies and their own ways of pursuing development aims within that system.
These are of course just some personal reflections and take-aways. Tomorrow, the sessions are focusing on whether we can come up with strategies for opening the space for fair assessments for development in this complex system.
The Big Push Forward convenors welcome our hundred and ten participants and student volunteers. For all those unable to attend, this evening, Brendan Whitty will be blogging about that day’s highlights and don’t forget tomorrow we are streaming live our final session.
Two papers have been prepared for the Conference and these are now online. Rosalind Eyben’s Uncovering the Politics of Evidence and Results disentangles the historical threads and origins of results-based management and evidence-based policy/programming discourses. She discovers a strong ‘family resemblance’. Both assume that evidence pertains largely to verifiable and quantifiable facts, and that other types of knowledge have less or little value; both have a particular understanding of causality, efficiency and accountability. The paper looks at how and why these discourses have entered and influenced the development sector, and who is promoting them in which contexts. What has been the effect on the sector’s priorities and practices, and particularly on its capacity to support transformative development?
Arguing the importance of being critically aware of how power sustains and reinforces the results-and-evidence discourses, Rosalind examines how these discourses generate artefacts (tools and protocols) such as log frames and theories of change that shape our working practices. When hierarchical ways of working block communications and dialogue, the artefacts trigger perverse consequences but their power is neither uniform nor constant. Analysing the politics of accountability and the sector’s internal dynamics, Rosalind suggests there is room for manoeuvre to expand and enable more transformative approaches to results and evidence within the sector.
Brendan Whitty’s paper, Experiences of the Results Agenda, analyses the data from an online survey which invited visitors to the Big Push Forward website to give their perceptions of the impact of the results agenda on their working lives. Brendan analyses the very different experiences and interpretations of the respondents, as revealed through 153 responses to the quantitative survey and 109 qualitative stories. The study discusses the day-to-day practice of small-e evidence – results and targets in the management of specific projects – rather than large-E evidence for establishing broader development policies. The stories are about the nuts and bolts of development processes and artefacts – the theories of change, results frameworks, reporting requirements and value for money rubrics. It is about what ‘e’ is being collected, how it is used, and to what effect.
Respondents disagreed about the effects of these artefacts, and their contradictory perceptions are often in tension: learning with accountability; capturing complexity in evaluation with harmonisation and reductionism; coordination of partners with constraining their freedom to adapt. How these tensions are resolved and how the perceptions play out seem to depend on how the artefact is communicated, managed and tailored to its context. Fit appears to be important: the fit of the artefact to the existing systems and capacity of the organisation, and also the fit of the artefact to the specifics of the intervention (e.g. its complexity, the number of partners). Finally, perceptions of an artefact seem to be affected both by staff’s own circumstances and by their relationships with others. The survey data suggest that those in M&E and management roles, who benefit from better data and more resources for their priorities, tended to be more positive than those in project implementation and mid-level roles.
During the conference we will be exploring these ideas and testing these interpretations. Come back tomorrow for the deliberations of Day 1.