
Time for a new, value-driven politics of evidence?

2013 April 9

Speakers at IDS and ITAD's recent launch of the Centre for Development Impact provoked interesting reflections on evaluation and the politics of evidence. Bob Picciotto, a former Director-General of the World Bank's Independent Evaluation Group, gave an inspiring keynote speech calling for a multi-disciplinary, 'beyond aid' evaluation agenda driven by the desire to tackle inequality and contribute to social justice. I found myself wondering whether this could be considered a call for a more reflexive, political evaluation agenda with a new politics of evidence, and whether the development community is equipped to respond.

Given shifts in the global political economy and increasing inequality worldwide, Picciotto maintains it is time for new evaluation strategies driven by values and by concerns about equity and human well-being. A partial, self-serving development evaluation agenda, determined by aid agencies' concern with demonstrating that discrete projects increase GNI and achieve short-term results, is no longer fit for purpose. It is time for approaches informed by economics, psychology and sociology; new metrics of human well-being; and new units of account, such as networks. Evaluators are challenged to 'get off the fence' and make sure evaluation strategies are driven by evidence of inequality and by a concern with improving equity.


Picciotto briefly rehearsed the pros and cons of randomized experimental approaches to producing evidence of 'what works' by establishing causality and attribution. He went on to argue that there is now consensus about their limitations. Costs and ethics aside, there is no guarantee that evidence of an intervention working (or not working) in one place at a particular point in time will hold there, or anywhere else, in the future. Evaluation designs are needed that include qualitative approaches to help explain how contextual factors affect the contributions of development interventions to development outcomes; they will also need to produce data that can be quantified. Much of the launch was spent considering whether and how complexity science and systems thinking could improve methodologies as development evaluation strives to remain relevant in the new aid architecture.


Not everyone at the CDI launch agreed with Picciotto's assertion that the methodological tussles are over, any more than those who followed the 'wonkwar' debate on the politics of evidence did. Citing some 'back of the envelope' analysis of the political economy of evaluation methodologies, several participants commented on the huge investments by the World Bank and other influential donors in institutions like JPAL, whose key aim is to build capacity for experimental evaluation designs. Examples provided by presenters resonate with perspectives I have heard from members of African evaluation societies: the incentives created through such training crowd out space for alternative approaches and methods. It is rumoured that there has been an increase in the commissioning of experimental designs, some of which may be wholly inappropriate for the programme attributes and contexts to which they are applied. Those I spoke to were calling for a shift in which citizens in aid-recipient countries can define their own evaluation paradigms and frameworks to support their learning and social change initiatives.


Comments from African evaluation society members and presentations at the CDI launch suggest that complex operations of power need to be addressed before a new politics of evidence driven by principles of equity can emerge. Evaluation questions and learning agendas are currently shaped by the political imperatives of aid agencies that have to demonstrate they are contributing to short-term results. Some practitioners reported biases in the types of programmes that tend to be chosen for impact evaluation: those that are easy to measure. Econometrics has tended to dominate, often asserting a neutrality and objectivity that Richard Palmer-Jones argues do not hold up to scrutiny.

Although academics and practitioners at the conference are working to address these issues, all agreed that evaluation is an interested business. Commissioners of evaluations and those supplying services have stakes in producing evaluations that respond to political demands linked to funding and to existing relations of power within the aid system. The barriers to new actors entering 'the evaluation market' can be significant. One donor pointed out that his agency's evaluation procurement processes and scoring systems are biased in favour of mature, white, male evaluators with significant evaluation experience; younger, potentially able evaluators could never compete.

Biases also exist in how evaluation methodologies are assessed. Some participants cited evidence that evaluations with critical findings are likely to have the quality of their evidence subjected to greater scrutiny, and are thus less likely to be published. A point made several times was that if evaluation is really going to contribute to a fairer, more equitable world, the evaluation community will need to acknowledge its own stakes in evaluation processes, learn to be more reflexive about the biases that reduce 'objectivity', and speak truth to power. To me this suggested the need for a new kind of politics in evaluation that is more honest about the partial nature of the knowledge produced by evaluation practice and that strives to better reflect the stakes of the most marginalized citizens in society.

