
Mixed Methods to the Rescue? Or Playing the Game to Change the Rules…

2013 April 4

The Politics of Evidence conference will be exploring how people are engaging with problematic practices and protocols, what alternatives they have found to create spaces for approaches more aligned with transformational development, and which serve their learning purposes. I believe we can learn a lot about this from the experience of ‘front-line’ practitioners who are often subtly playing the game to change the rules.

It would seem that most people believe that ‘mixed methods’, amongst other things, are essential in order to make sensible judgements about the effectiveness of development interventions. This would include agencies such as the World Bank, as well as evaluation specialists. DFID has recently commissioned an important study entitled ‘Broadening the Range of Designs and Methods for Impact Evaluations’, which sought to ‘establish and promote a credible and robust expanded set of designs and methods that are suitable to assess the impact of complex development programs’.

Now whilst there is still a great deal of debate about whether there is really a commitment to mixed methods in practice, there are also a number of challenges with implementing ‘mixed methods’ approaches. This post focuses on these issues.

In 2010 the Developmental Leadership Program brought together a number of agencies and programs, including the Asia Foundation, the Oxfam International Youth Partnership, Leadership PNG and the Pacific Leadership Program, as well as a number of staff from AusAID departments. The aim of this collaboration was to explore how best to monitor and evaluate programs that sought to strengthen local individual and collective leadership and associated reform. A series of workshops over 18 months promoted an exchange of experiences about attempts to develop approaches that were tailored to the needs and contexts of these programs, and which sought to develop an appropriate mix of methods and approaches.

In 2011, as part of this process, the Developmental Leadership Program assembled a range of evaluation specialists from across the methodological spectrum. This resulted in a paper, which I co-authored with Linda Kelly, describing the different approaches discussed. We also suggested that whilst evaluators might be largely concerned about the merits and limitations of different methods, managers and decision-makers in development agencies also assessed methods on the basis of whether they met the information demands of their political masters.

It was with this understanding that we then explored with the agencies and programs involved the practical challenges they faced in terms of juggling these methodological and political demands with programs that are necessarily engaged in the messy realm of ‘working politically’, and what strategies they used in doing so.

Some of the conclusions we drew from the experiences of these agencies were:

  • Having a theory of change that can identify the complexity of the operating environment and provide a good rationale for program strategy was helpful. This involved developing an understanding of what and who drives change based upon an analysis of political and social relations and processes, including the role of influential stakeholders and the relationships between them. These were not prescriptive theories of action.
  • While all the organisations that participated are working towards long-term, substantial social change, they were also able to identify short- and medium-term change: often in relationships and levels of trust between individuals and organisations. These changes clearly do not tell the whole story, but systematic collection of data about them was seen to be important as this could provide a useful basis for understanding the causes and processes of longer-term change, and might at the same time provide some information to keep donors and their political masters happy. However, the precise nature of these changes was not predictable in advance.
  • The participating organisations adopted a mix of methods and strategies to infer their contribution to change within complex political processes. These included: direct observation; asking participants and observers; using databases and software to assist analysis and pattern detection; supporting organisational and coalition capacity self-assessments; undertaking creative comparisons of the costs and benefits of different ways of working; using social network analysis; and developing case study narratives. An important element in many of these processes was the different attempts to elicit feedback from allies and partners, using formal and informal methods, about the quality of the relationship that had been established with them. Many of these approaches are consistent with rigorous ways of assessing causation.
  • The organisations built from current ‘practice knowledge’ to identify several features that are associated with effective social change processes, and to test for their presence. Some of these included: having a solid basis of political and social analysis of the context; investment in the emergence of local developmental leaders and coalitions who are able to act for change; having the flexibility and capacity to act quickly when critical junctures or opportunities arise; supporting locally led processes and development solutions; and working for change over the longer term.
  • The ability to quickly utilise opportunities for change, particularly when there are ‘tipping points’ or ‘critical junctures’, requires monitoring that supports nimbleness and agility. At the same time it is important that M&E continues to collect information in a systematic way. Creating space for regular reflection on the changing context and program-wide analysis seems to have been a critical feature in enabling this balance to be achieved.
  • Most of these programs are seeking to understand their ‘contribution’ to the broader changes associated with their interventions, as much as more directly attributable outcomes. This usually involves verifying their theory or hypothesis about how change happens, which includes an analysis of other influencing factors, and thus is as much about seeking to reduce uncertainty about the contribution being made as it is about ‘proving’ impact.
  • Many of the programs undertaken by the organisations represented at the DLP workshops are complex, ‘messy’ and difficult to communicate to stakeholders and external audiences. These programs are rarely able to present short, sharp, quantifiable outcomes, and do not wish to ‘claim’ the successes of others (as attributable results measures often demand), even if they have contributed to them. There is often a political requirement to keep successes ‘under the radar’ to safeguard long-term achievements. On the other hand, the communication of achievements can often be important, not least to members of the local networks and coalitions involved, and their supporters, as this can help to mobilise further action and broaden coalitions. If monitoring and evaluation and associated research is going to meet the demands of multiple stakeholders, and actually lead to program and policy adaptation, then the effective communication of what are often complex processes needs to be a central consideration.

Often supporting partners and networks to ‘tell their own story’ can not only provide some concrete and verifiable examples of achievements, but can also allow the primary actors to determine which of these they choose to make public. In this sense such an approach can simultaneously strengthen domestic actors in their ability to promote change and provide some of the evidence of change that others might need to satisfy their constituents.

  • Given the complex, non-linear nature of the change processes involved, in a number of cases a more research-oriented approach to tracking and explaining change over time is required. As a result a number of the agencies are seeking to partner with sympathetic academic researchers. Separating out some of the longer-term research or evaluation work from the more immediate monitoring can protect the ‘important’ from the more ‘urgent’ demands of some stakeholders, as well as ensuring that hard-pressed program staff are not overwhelmed by expectations they cannot meet.

Whether these approaches are really ‘playing the game to change the rules’ or simply legitimising current approaches will be very much part of our discussions later this month.
