
Has the focus on results damaged aid’s potential to support long term transformation?

2012 April 4

Owen Barder has written a typically crisp blog on the Results Agenda. He provides a useful and succinct delineation of 'Seven Worries about Focusing on Results', most of which the Big Push Forward, along with many others, has canvassed in the past year or so.


First, focusing on results may add to bureaucratic overload. Second, it may make aid less strategic and more short-termist. Third, it may impose the wrong priorities. Fourth, it may ignore equity. Fifth, it may create perverse incentives. Sixth, it may inhibit partnership. Seventh, the results information is all bogus anyway, since claims about results must rely on assumptions about the counterfactual which are usually flawed or incomplete.

It is useful that Owen takes these concerns seriously and looks at practical ways to address and manage them: reducing bureaucracy, remaining strategic, and being proportionate in the amount of evaluation work commissioned.


Reactions

The comments on the blog have also elicited some interesting discussion and exchanges. Sigrun Gedal, for example, mentions the key role of citizens in developing countries holding their governments to account, and suggests that this side of the equation often seems to get little air time in these debates. In a similar vein, Dan Kyba notes the importance of democratic process as a means of passing judgement on performance. Jeff underlines the important culture changes that are likely to be required if incentives are to truly change for the better. Marcus Leroy and Rick Davies write that there is a danger of 'taking people for a ride' through 'false simplicities' which suggest a neat, straightforward and quantifiable cause-and-effect relationship between aid inputs and results. Owen has responded to many of these points.


My thoughts

Owen notes that the seven concerns about the results agenda are about risks which have, so far, largely not materialized – according to what 'several people in DFID' have told him. I am not sure the 'evidence' he provides is up to his usually robust standards. The Big Push Forward has certainly gathered a number of examples from practitioners of how this agenda is changing behaviour and practice and creating the perverse incentives that Owen suggests are a risk. We also have a number of examples of people not feeling able to share their concerns publicly in case they are seen – in their own organisations, or by their donors – to be 'anti-results' or against 'value for money'. We also had an email from a regional director in another bilateral aid agency summarizing observations from staff in the agency's country offices about what was unfortunately happening in DFID's programmes because of the focus on results. 'Unfortunately', he emailed, 'all this is nothing I can share on paper (all of it a bit sensitive as you may understand)'. This is ironic given the recent emphasis on transparency. This particular interlocutor made a strong case for robust evidence-based research on the impact of the results focus. What is clear is that we need more research and frank debate in this area, and that donors should encourage this.


We also need to face the fact that it is not only the ability of certain processes to produce robust and reliable information that matters. It is also clear that the politics of 'evidence' and evaluation is very much at play in this debate. Certain evaluative tools and methods deliver better political products to Ministers and governments than others. This will continue to require ongoing discussion, recognition and vigilance.


As far as practical solutions go, the Big Push Forward would like to explore how citizens in both the 'developing' and the 'developed' world might be better informed and better linked. A number of observers – including Owen – argue that the days when aid agencies and NGOs held a privileged and relatively monopolistic intermediary position between 'taxpayers' or individual 'donors' and the people aid seeks to reach are disappearing fast. Kiva, Global Giving and other internet-based agencies are providing – or at least seem to provide – more direct connections, which are also arguably more transparent. As Owen suggests, agencies "must become a platform through which citizens can become involved directly in how their money is used", but equally this needs to ensure an even greater involvement of those whom this enterprise ultimately seeks to benefit.


Is it not time to take this to another level by creating deeper and broader two-way 'feedback' loops (as Owen himself has argued in his post on what agencies can learn from evolution), for example by:

a) supporting local organisations and independent media to tell their own stories about aid and development effectiveness,

b) encouraging the incredible innovation emerging from the likes of Ushahidi and Twaweza (of which Owen is a board member) to share information in all directions and to help visualize, crowdsource and aggregate this information and these stories, so that the complex patterns and weave of the development tapestry become clearer,

c) taking really seriously the challenge of moving from a transactional to a transformational community engagement agenda in donor countries, and helping to build networks and coalitions between these actors and those promoting progressive change elsewhere, so that information on effectiveness and performance is more directly shared and debated.


This is not an alternative to more formal, rigorous and appropriate evaluation; rather, this kind of approach is an important complement to such evaluation of the development process. It provides more real-time feedback, which is important for ongoing learning and adaptation, and it would also start to change the incentive structures faced by governmental and non-governmental aid agencies in important ways.

2 Responses
  1. April 18, 2012

    Hi Chris.
    Though I did comment about "false simplicities", the main thrust of my argument was that in the example cited ("the ONE campaign published an important summary of results which are expected from UK aid between now and 2015") the results published used the wrong unit of analysis – e.g. instead of the total _number of people_ vaccinated, it should be the _number of countries/governments_ where immunisation coverage is above x percent.

    By using countries as the unit of analysis, the results agenda could become more aligned with some of the empowerment objectives referred to above ("the key role of citizens in developing countries holding their governments to account").

    regards, rick
