Guest blog by Johannes Meuer, Anne Ellersiek, Daniel Shephard and Christian Rupietta – the evaluation team who undertook a meta-review of Oxfam’s effectiveness reviews of policy influence, citizen voice and good governance initiatives.
During the last few years, there has been increasing interest in using Qualitative Comparative Analysis (QCA) as an alternative method for evaluating policy change and advocacy interventions. QCA draws on set theory and Boolean algebra to conduct systematic comparative configurational analysis. In short, QCA enables evaluators to identify the combinations of factors that together successfully influence change (a minimal sketch of this logic follows the list below), providing insights into…
- …multiple successful approaches to policy change (e.g., one approach is to convince key political decision-makers, another is to raise awareness and strengthen affected groups),
- …the internal composition of successful approaches (which conditions enable groups to take action?), and
- …the varying impact of certain conditions across approaches (e.g., financial support may help achieve policy change in one setting but impede it in another).
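To make this concrete, here is a minimal Python sketch of the truth-table logic that underlies a crisp-set QCA: cases are grouped by their configuration of conditions, and configurations whose cases consistently show the outcome become candidate “recipes” for change. The conditions, cases, and scores below are entirely hypothetical.

```python
# Hypothetical crisp-set data: 1 = condition/outcome present, 0 = absent.
# Conditions: (elite lobbying, grassroots mobilisation, media campaign).
cases = [
    ("Case A", (1, 0, 1), 1),
    ("Case B", (1, 0, 1), 1),
    ("Case C", (0, 1, 1), 1),
    ("Case D", (0, 1, 0), 0),
    ("Case E", (1, 1, 0), 0),
]

# Build the truth table: group cases by their configuration of conditions.
rows = {}
for name, config, outcome in cases:
    rows.setdefault(config, []).append(outcome)

# A row's consistency is the share of its cases showing the outcome;
# rows with consistency 1.0 are candidate "recipes" for policy change.
for config, outcomes in sorted(rows.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    print(config, f"n={len(outcomes)}", f"consistency={consistency:.2f}")
```

In a full analysis, the consistent rows would then be logically minimized (e.g., with Quine-McCluskey) to yield the simplest combinations of conditions linked to the outcome; the sketch stops at the truth-table step.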
As a team of policy consultants, researchers, and methodologists, we used fuzzy-set QCA for a meta-review of Oxfam’s Good Governance initiatives (which fall across a spectrum of efforts to strengthen citizen voice and influence policy) to reveal combinations of strategies that successfully widened political space and/or changed policy across the different contexts in which Oxfam works.
In our experience, QCA is particularly useful for evaluations of policy influence and citizen voice interventions, for two reasons. First, the configurational nature of the results draws on the strengths of both qualitative (e.g., in-depth case studies) and quantitative methods (e.g., regression analysis), retaining in-depth case knowledge while developing insights that generalize within certain settings. Second, unlike statistical analysis, QCA does not require large datasets to provide robust results; it can also handle the small and medium sample sizes that are much more common in evaluations of these kinds of interventions.
Reflecting on our experience of using QCA for this meta-review, and for evaluations of policy influencing work more broadly, we identified three crucial challenges in using QCA successfully for evaluation purposes.
Connecting silos of expertise
The first challenge relates to assembling a team of evaluators with sufficient expertise and complementary skills. QCA itself has developed significantly over the past 30 years, and the academic community around QCA is growing rapidly, as is the available methodological expertise. However, QCA is more commonly used in research, and those with expertise in QCA often lack experience with evaluations, particularly evaluations of policy influence and advocacy work.
Beyond methodological expertise, QCA also requires strong collaboration between the evaluation team and practitioners. QCA draws heavily on substantive case knowledge (i.e., experience in designing policy influencing activities and in their day-to-day monitoring and implementation) to calibrate data. Because evaluators usually do not have such knowledge, conducting QCA requires close collaboration between the evaluators and those with practical experience of policy influencing work and knowledge of the case studies, in order to integrate these silos of expertise.
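As an illustration of what calibration involves, the following Python sketch mirrors the “direct method” of fuzzy-set calibration described by Ragin (2008), which maps raw values onto membership scores between 0 and 1 using three qualitative anchors: full non-membership, a crossover point, and full membership. The indicator and anchor values here are hypothetical; in practice, they would be set together with practitioners who know the cases.

```python
import math

def calibrate(raw, full_non, crossover, full_mem):
    """Map a raw value to a fuzzy-set membership score (0..1) using
    three anchors, following the log-odds ("direct") calibration method."""
    if raw >= crossover:
        # Scale so the full-membership anchor maps to log-odds +3 (~0.95).
        log_odds = (raw - crossover) * 3.0 / (full_mem - crossover)
    else:
        # Scale so the full-non-membership anchor maps to log-odds -3 (~0.05).
        log_odds = (raw - crossover) * 3.0 / (crossover - full_non)
    return math.exp(log_odds) / (1 + math.exp(log_odds))

# Hypothetical indicator: share of targeted decision-makers reached (%).
# Hypothetical anchors: 10% = fully out of the set "broad political
# access", 40% = crossover point, 80% = fully in the set.
for pct in (5, 25, 40, 60, 90):
    print(pct, round(calibrate(pct, 10, 40, 80), 2))
```

The key point is that the anchors encode substantive judgments about the cases, which is exactly why calibration cannot be done by the evaluation team alone.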
Validation exercises
The second challenge, then, lies in aligning QCA with the characteristics of policy influencing work. Regular meetings and validation exercises between the evaluation team and all other parties involved are the backbone of any successful QCA-based evaluation. As suggested above, QCA promises valuable results, for example through its approach to data calibration (based on context-specific calibration points) or the insights it generates into profiles of successful policy interventions. Validation exercises offer important opportunities to discuss calibrations, present preliminary results, and confirm the extent to which specific field experience corresponds to certain profiles. Although such validation exercises involve additional time and costs, they are crucial for ensuring that the evaluation’s modelling fits both the practical experience of policy influencing work and the underlying theories of change (TOCs).
Communicating and visualizing results
The third challenge relates to the communication and further validation of QCA results with stakeholders, particularly non-technical audiences. A recommended approach is to illustrate profiles by drawing on specific cases (either most difficult or most typical) to highlight the crucial conditions of a profile, and to contrast these illustrative cases with other experiences. For example, in previous evaluations we found it useful to develop avatars representing different profiles of policy influencing activities. Similarly, radar or configuration charts facilitate the communication of QCA results. Such approaches to communication and visualization seem particularly attractive in the context of evaluations of policy influencing activities because they facilitate validation of the findings: they raise awareness of how the different, often complex configurations of strategies and conditions applied in practice map onto the aggregated typical profiles.
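As a simple illustration of the charting idea, the following Python sketch draws a radar chart of one hypothetical successful profile, with each axis showing a condition’s fuzzy-set membership score; the condition names and scores are invented for illustration.

```python
import math
import matplotlib.pyplot as plt

# Hypothetical fuzzy-set membership scores for one successful profile.
conditions = ["Elite lobbying", "Grassroots mobilisation",
              "Media campaign", "Coalition building", "Donor support"]
profile = [0.9, 0.3, 0.8, 0.7, 0.2]

# Spread the conditions around the circle and close the polygon.
angles = [2 * math.pi * i / len(conditions) for i in range(len(conditions))]
angles.append(angles[0])
values = profile + profile[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(conditions)
ax.set_ylim(0, 1)
ax.set_title("Illustrative profile of a successful configuration")
plt.show()
```

Charts like this let practitioners see at a glance which conditions a profile combines, which makes it easier to ask whether their own cases really fit it.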
The way ahead
Given the relative newness of QCA for evaluations of policy influencing, advocacy and citizen voice work, we should expect that many more challenges and insights will emerge and that our understanding of its application will grow (see, for example, www.compasss.org, which publishes methodological advancements in QCA in a peer-reviewed working paper series). Methods will continue to shape, and be shaped by, established approaches to data collection, analysis, and, perhaps most importantly, interpretation and communication. A better understanding of how to usefully apply QCA will also clarify its benefits and weaknesses vis-à-vis other qualitative and quantitative methodologies. Ultimately, what matters is not the novelty of a method but how useful it is in addressing the evaluation questions.
For more information, try…
- Befani, Barbara, Ledermann, Simone, & Sager, Fritz. (2007). Realistic evaluation and QCA: Conceptual parallels and an empirical application. Evaluation, 13(2), 171-192.
- Befani, Barbara. (2013). Between complexity and generalization: Addressing evaluation challenges with QCA. Evaluation, 19(3), 269-283.
- Meuer, Johannes, Rupietta, Christian, & Backes-Gellner, Uschi. (2015). Layers of co-existing innovation systems. Research Policy, 44(4), 888-910.
- Meuer, Johannes, Ellersiek, Anne, Rupietta, Christian, & Caves, Katherine. (2016). The MasterCard Foundation Final Aflateen Evaluation: Global report
- Centre for Development Impact blog, Qualitative comparative analysis – an addition to the evaluator’s toolbox?
- Schneider, Carsten, & Wagemann, Claudius. (2012). Set-theoretic methods for the social sciences: A guide to qualitative comparative analysis. Cambridge, UK: Cambridge University Press.
- Ragin, Charles. (2008). Redesigning social inquiry: Fuzzy sets and beyond. Chicago: University of Chicago Press.