Hercules, VfM and moderation in use of evidence
In his classic article back in 1959, Charles Lindblom describes the technical ideal of evidence-based policy-making:
“[A policy-maker] might start by trying to list all related values in order of importance … Then all possible policy outcomes could be rated as more or less efficient in attaining a maximum of these values. This would of course require a prodigious inquiry into values held by members of society and an equally prodigious set of calculations on how much of each value is equal to how much of each other value. He could then proceed to outline all possible policy alternatives. In a third step, he would undertake systematic comparison of his multitude of alternatives to determine which attains the greatest amount of values.” Public Administration Review, Vol. 19, No. 2 (Spring, 1959), p. 79.
Lindblom’s conclusion is that this approach is ‘of course’ impossible. It “assumes intellectual capacities and sources of information that [people] simply do not possess”.
One possible interpretation of ‘value for money’ (VfM) management processes resembles Lindblom’s task. On this interpretation, VfM requires the monetization of interventions so that different interventions can be compared using a common denominator. In a guidance note on VfM and governance indicators, DfID rejects this approach, emphasising the need for a common-sense approach which does not ask impossible questions like ‘what is the value for money of an election?’ This moderate position has been widely taken up, and considerable ingenuity is being devoted to developing methods for capturing value for money (see, e.g., BOND, ICAI, ITAD).
However, recent evidence sessions with the Public Accounts Committee (PAC) suggest that its expectations of DfID’s VfM process may be closer to Lindblom’s Herculean process:
“The Department must be able to demonstrate unequivocally that it allocates resources on the basis of value for money… The Department should develop clear and auditable mechanisms which ensure that staff in both Headquarters and country offices have value for money criteria at the heart of their decision making, and that they reallocate funding to the best possible alternative when projects are delivering weaker value for money than expected.” (At p. 6, para 7.)
The requirement for an unequivocal demonstration of value for money suggests that DfID must show how it has weighed different programming choices against one another when making allocation decisions. It hints at a quiet, underwater trial of strength between DfID and its domestic stakeholders over the definition of the VfM agenda, with the PAC seeking to lever open the black box of DfID’s decisions and subject them to technical review and audit.
The process of making resource-allocation decisions between countries is a particularly interesting arena for this struggle. Consider the PAC’s demand in the evidence session that DfID assign a number to the fiduciary risk of giving aid in each country – a demand to which DfID responded (rightly, in my view) that this made little sense, since risk depends on the recipient institution rather than the country. Interpreted instead as a desire to have DfID articulate technical and auditable criteria for country allocations, however, it makes a great deal more sense. Such an interpretation may also cast light on other data requirements: the demand for comparable sector-level data across countries and the Bilateral Aid Review’s requirement for evidence on security for each country. All constitute comparable indicators that might inform resource-allocation decisions.
However, as Lindblom observes, there are limits to the capacity of evidence-based policy-making. Some decisions cannot easily be boiled down to technical criteria, and pressure to treat them as though they can carries considerable challenges, as illustrated by the recent Africa All-Party Parliamentary Group’s report on DfID’s Bilateral Aid Review:
“Our concerns relate to the lack of objective criteria used to select focus countries… The Needs-Effectiveness Index (NEI) appears to have been used to justify the subjective decisions of officials, rather than to make objective decisions. DFID did not provide a clear and convincing explanation of why its Needs-Effectiveness Index was constructed in the way it was…”
While the AAPPG disputed the particular grounds DfID chose (which resulted in the closure of the Burundi office, amongst others), in fact any set of assumptions would have been open to challenge. An insistence on common indicators, representing complex ideas of value and impact, will lead to reductionism. This risks submerging the real decisions somewhere in the generation of the numbers and the assumptions made in doing so. Because those reasons are hidden, they are less susceptible to challenge and scrutiny, and may therefore, paradoxically, be less accountable. The emerging consensus is that we should not throw the evidence baby out with the reductionist bathwater: the limitations of evidence do not mean that evidence should not be collected and used. Equally, however, moderation is required so that processes of evidence collection inform rather than replace decisions. Resource allocation on the basis of reductionist indicators is unlikely to deliver on its promise, either in terms of accountability or effectiveness.
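By way of illustration, the sketch below uses entirely hypothetical countries, scores and weights (not DfID’s actual Needs-Effectiveness Index) to show how the ranking produced by a composite index can turn on the analyst’s choice of weights rather than on the underlying data:

```python
# Hypothetical illustration: a toy composite index in which the country
# ranking is decided by the choice of weights, not by the data.

def composite_index(scores, weights):
    """Weighted sum of normalised indicator scores (0-1 scale)."""
    return sum(scores[k] * weights[k] for k in weights)

# Invented, normalised scores for two candidate countries.
countries = {
    "Country A": {"need": 0.9, "effectiveness": 0.3},
    "Country B": {"need": 0.5, "effectiveness": 0.8},
}

# Two equally defensible weightings embody opposite value judgements.
weightings = {
    "need-first":    {"need": 0.7, "effectiveness": 0.3},
    "results-first": {"need": 0.3, "effectiveness": 0.7},
}

for label, weights in weightings.items():
    ranked = sorted(countries,
                    key=lambda c: composite_index(countries[c], weights),
                    reverse=True)
    print(f"{label} ranking: {ranked}")

# The need-first weighting puts Country A on top; the results-first
# weighting puts Country B on top. The substantive allocation decision
# lives in the weights, which is precisely where it is hardest to see
# and to challenge.
```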