Perverse effects of results measurement – a Swedish debate

2011 May 11

Since becoming Minister for International Development in 2006, Gunilla Carlsson (a former accountant) has made ‘results’ the key theme of the restructuring of how Sweden does aid. She has played a leading role in forming a like-minded group of development ministers that includes the UK, Canada, Germany and Denmark. In Sweden’s development practice community there is much unhappiness about the unintended effects and perverse consequences of how this agenda is being pursued, but until now there has been little open debate about the matter.

However, on 10th May, Göran Holmquist from the Nordic Africa Institute organised a joint event with Sweden’s Development Forum at which I had been invited to speak. The other speakers were Tore Ellingsen (a behavioural economist) on motivational crowding out; Daniel Tarchys, who spoke about the former Soviet Union’s experience of the perverse effects of target setting and controlling through numbers – practices that New Public Management started to promote just as the USSR collapsed; and Jan Bjerninger, recently retired from managing Sida country programmes, who spoke to former USAID Director Natsios’ theme of ‘obsessive measurement disorder’. Just as at the Big Push Back event last September in the UK, a large number of people wanted to come, and with space for no more than 100 or so participants, the NAI decided to webcast the event for all those turned away from the meeting itself.

Four discussants responded to the main speakers, including the Minister’s most senior civil servant, Johan Borgstam, Director General for International Development Co-operation, who listened hard, defended the Minister’s agenda robustly and stayed the whole three hours. From the floor, a former Sida staff member said he had noticed that, with greater control being exerted from the centre, country programme officers were becoming risk averse and not responding to context-specific opportunities. Borgstam rejected this argument: it was because development cooperation had taken too many risks in the past that it was now necessary to tighten things up and make sure money was not wasted. Others responded by proposing that past ineffectiveness arose because development agencies had been insufficiently reflective – in terms of double-loop learning – about what they were doing and why they were doing it. The results agenda is framed in terms of greater transparency and accountability, but we need to make sure this does not further discourage organisational learning and reflexive practice (one of the Big Push Forward themes).

Accountability and whose results count was another hot topic. Is increased accountability and transparency to Swedish taxpayers undermining accountability to aid recipients? What has happened to the Paris Declaration principle of country ownership? (I learnt, incidentally, that Sweden is no longer enthusiastic about general budget support.) Borgstam said the aim was to produce an English-language version of http://www.openaid.se/ and that he hoped a journalist in an aid-recipient country could then challenge the Ministry about how Swedish money was being spent. However, discussants and members of the audience continued to worry about who was driving the agenda: Swedish officials or the people who justify aid’s existence? A member of the audience commented that, over the years he had worked in Sida country offices, he had noticed less and less time spent in dialogue with local partners and more and more time spent responding to head office demands (bringing to my mind the ‘urban myth’ about USAID country offices where the clocks on staff members’ PCs are fixed at Washington time).

Holmquist summed up with the following points:

  • Aid practice needs to be more modest: everything cannot be known in the messy contexts in which aid operates;
  • We need to be more cautious with the results agenda because of the well-known perverse consequences of pre-set objectives and excessive reporting requirements;
  • Good practice requires investment in learning rather than in controlling;
  • The results agenda risks undermining the idealist commitment and professional pride of staff.

Watch the full debate

3 Responses
  1. May 17, 2011

    One problem is that good results indicators require a lot of thinking.
    As an example, take an offer to monitor the results of private sector development in the Niassa province in Mozambique.
    The consultant thought that the only problem with a micro-credit programme was that the target, 3,000 credits, was too high. A completely useless indicator. Repayment record might be slightly better. Instead, information on the impact of the credit programme should be developed.
    One would like to know whether there has been an increase in family income.
    A good example in research co-operation could be the number of peer-reviewed articles and citation indices.

    We need better and fewer indicators.

  2. Laurie Adams permalink
    May 11, 2011

    Thank you for this, Ros – very timely, as I’m speaking tonight at the University of Johannesburg on ‘new and old trends in aid evaluation’ and need some inspiration. It feels like there are few new trends – just that ‘going against the flow’ (of mainstream demands for logframes, quantifiable targets etc.) is getting harder, not easier, as the current gets stronger!

    I suddenly wondered, reading your post: why is all the pressure on targets and indicators? Most of us agree we need more evidence of our contribution to change – qualitative and quantitative. But why can’t this be achieved through much higher quality evaluations that do not only double- but triple-loop learning, checking outputs, outcomes and impact – effectiveness, results, testing the theory of change? As long as the starting point – what we are trying to achieve and why – is clear (which does not necessarily require indicators and definitely doesn’t require targets), why is all the focus at the planning stage, when we all know pre-set targets create perverse incentives? (A recent example from our work: due to a target of ownership, women owning land kilometres away that they could get to only once a week, rather than having access to or power over communal land nearby.)

    We have found that our peer reviews (where staff and board members from other members of the federation visit and review) give much more useful input to future direction than evaluations; that evaluations rarely tell us anything we don’t already know; and that it feels we are now doing them primarily for credibility purposes. Could we do more on peer reviews across organisations, and on joint reviews and evaluations, to understand and show our impact?

Trackbacks and Pingbacks

  1. Obsessive Measurement Disorder

Comments are closed.