Crowd-sourcing your experiences about the Results Agenda

2012 October 31

How are you experiencing the results agenda? Is it helping to bring more focus to choices, or more rigour to planning and priorities? Or is it forcing protocols that gobble up time or reduce space for innovation? Has it had other effects, or indeed none at all? This blog launches our survey to understand how people are experiencing the results agenda, in the lead-up to the conference on the politics of evidence. We need to know more, as a sector, about what is happening, in order to have grounded and constructive debates.

We invite you to visit our survey link, and give us your experiences, positive and negative! The survey invites stories, shorter fragments or snippets of experiences, and poll responses. All information will be confidential, all questions are optional, and nothing will be shared unless you agree. To subscribe to the Big Push Forward or to give support, please see the sidebar.

Everyone agrees that the development sector must use the public and charitable funds it receives as effectively as possible for the benefit of people living in poverty. The question is whether the reforms of the results agenda have enabled this, hindered it, or a bit of both. (By the results agenda, we mean here the intensification of institutional practices requiring the use of evaluations and evidence on results for the planning and management of aid, and a parallel tightening of what constitutes valid evidence.) The survey launched today seeks to collect some of the sector’s experiences on this.

On this, there is considerable discussion within the sector. Owen Barder – a persuasive champion of the results agenda – articulates the two positions elegantly in this post. On one side of the scales, he summarises a series of worries around the intensification of management for results: it may undermine partnerships, overload staff, create perverse incentives and shift development practice towards the measurable, while still producing inadequate information for management. On the other side of the scales, results-based management is seen as vital for effective management of aid, particularly in complex environments; for accountability to taxpayers; for selling development politically to a financially squeezed north; and – critically – for closing the feedback loop that enables better projects to be fostered. Only through these management systems will the sector fulfil its duty to the communities it claims to support.

No-one claims the positions fall into a simple dichotomy, that being results-oriented is completely good or bad, or that it represents a single monolithic approach. The issues and divides run deeper, dealing with ethics and power. They raise questions of where power lies, and should lie, within the development sector: how can the knowledge, perspectives and values of those the sector seeks to benefit be included, rather than marginalised, in the search for ‘objectivity’ and aggregated, simple results? How do restrictions on valid evidence and the choice of ‘countables’ prioritise particular values, to the exclusion of others?

Which interpretation of the results agenda one favours seems to depend on the mental picture of development one carries: is it transformational development – social, cultural and political change reducing the forces that perpetuate inequality and generate poverty? Or is it the delivery of services, whether vaccines, education, wells or power, which produces measurable positive changes in the lives of individual people? The latter, clearly, is more susceptible to measurement, but may not be sustainable after aid funds are withdrawn, and offers a limited response to transforming the underlying inequalities that perpetuate poverty.

There is a clear role for both interpretations of development. Both require appropriate measurement processes. The BPF believes that the former should not be ignored or excluded by a one-size-fits-all set of management processes that tacitly favour the latter. From a technical evaluation perspective, the innovative work on complexity by Elinor Ostrom, Ralph Stacey and David Snowden – reviewed by Harry Jones, Ben Ramalingan and others [pdf] – builds on a long line of management theory to argue that different forms of management processes are suitable in different environments.

The question is not whether the results agenda is good or bad per se, but, politically, what space we end up with for approaches that are ‘best fit for context’ rather than ‘best practice’ from one or the other perspective. The sector has reacted to the challenges of the results agenda by innovating. However, for all the insistence on evidence at project level, at this systemic ‘triple-loop’ learning level there seems to be little evidence in the public domain of how the results agenda is applied and of its positive or negative effects on everyday management and implementation practices and relationships (if you know of any such evidence, please do bring it up in the comments section). The question is about how we learn, know things, and establish which approaches, epistemologies, power distributions and management systems work best.

We know very little about this. We do not know in detail how different management processes have been adopted, nor what their implications are. Rosalind Eyben, in an earlier post, called the experiences ‘hidden transcripts’. We have experienced ourselves that, for different reasons, resistance and complaints, as well as positive experiences, have remained subterranean. Chris Roche therefore suggested that gathering a larger evidence base might be challenging: it is to this end that we are launching our crowd-sourcing survey, to generate evidence that can better ground our debates.