Jim Coe and Rhonda Schlangen, experienced advocacy evaluators, discuss the evolving challenge of monitoring, evaluation, accountability and learning in the context of influencing work.
Over the last couple of decades, there has been a seismic shift in thinking about evaluating influencing work. As we argue here, we need to think critically about where we’ve got to and how to manage the tension between wanting the answers promised by rigorous planning and evaluation processes, and the inherent uncertainties around how social and political change really happens.
We first worked together nearly 20 years ago. At that time, thinking about how to track and assess campaigning and advocacy was pretty much uncharted territory, and there was a widely held notion that it was not possible to evaluate advocacy.
That consensus gave advocates massive latitude. Campaigns often had wildly ambitious goals, without a clear idea of how to get from A to Z. Experienced campaigners who knew what they were doing flourished, unshackled, but there was limited accountability, very little room for learning, and little ability to identify good practice or to tell whether things were working.
These days, things are very different. There are strong pressures (mainly from senior managers and donors) on advocates to have defined plans, set out ‘SMART’ objectives, report against (mainly pre-) set outcomes and indicators, and show tangible results. (Oxfam recently conducted a meta-review of its influencing initiatives.)
We have elevated theories of change as a planning tool and invested heavily in formalized MEL systems. And there is great excitement whenever a new tool or method emerges that promises to somehow override the inherent uncertainty of advocacy.
These shifts have brought some discipline to influencing strategies and plans and have been helpful in reining in the ‘anything goes’ culture.
It’s good to be rigorous and systematic, and to have a defensible basis for the decisions made. It’s right that advocates are accountable for the resources they are responsible for. And more linear, bounded ways of thinking about advocacy and social change may work well in (or, at least, not be totally inimical to) some contexts – targeted interventions to achieve discrete policy change in sub-systems involving only a limited group of actors, for example.
But much social change is multi-dimensional, about more than policy change. It’s about power relationships, social norms, behaviours and practices. It’s unpredictable, messy, inherently complex. And it’s both a truism and a heresy to say that in many such contexts some things are necessarily unknowable. So attempts to banish uncertainty – from plans and then from assessments of achievements and effectiveness – create an uneasy relationship with the realities of social change.
We have learned a lot collectively about what works in advocacy evaluation over the years, and this learning is taking place within broader conversations exploring complexity and systems thinking and models of adaptive management.
And yet we remain attracted by approaches to advocacy planning, monitoring and evaluation that are predicated on, and encourage, a false sense of predictability and control.
Life would be a lot easier for the advocacy community, and the advocacy evaluation community, if influencing work were tangible and measurable. We could plan with more confidence, exert control over the progress of events, show unambiguous results, and know what works.
But wishing this was true doesn’t make it so.
So rather than trying to eliminate uncertainty, and the need for critical judgement, it would be better to factor these things back in and focus on how best to navigate these challenges rather than evade them.
We are not saying go back to the ‘Wild West’ days of evaluation-free advocacy. We can build on positive developments, but we need to make sure they better reflect reality and are in line with the nature and needs of advocacy.
This aspiration for how things might evolve is summarised in the table below:
| | 20 years ago | Now | Where we should move to |
| --- | --- | --- | --- |
| Advocacy Objectives | Vague and lofty aspirations | Short-term and measurable | Plausible ambition |
| Advocacy Planning | Black box | (Often reductionist) causal logic | Influencing interventions as part of an eco-system |
| Tracking | No information | Lots of information | Tracking what’s important |
| Monitoring | Ad hoc | Highly formalized, overlaid on campaigns | With the rhythms of the campaign |
| Accountability | No accountability | Skewed upwards | Multiple accountabilities: to partners and communities as well as funders and managers |
| Underlying premise | “It’s impossible to evaluate advocacy” | A search for certainty | Accommodating uncertainty, valuing (space for) interpretation |
Essentially, these shifts boil down to resisting the lure of tools and methods that assume, or promise, certainty – and instead accommodating the unpredictability of change and the uncertainties of measurement, and investing in approaches that better respond to these realities.