Oxfam campaigners holding Stand as One signs in the crowds at Glastonbury festival, June 2016. Credit: Marc West/Oxfam

Accommodating uncertainty in advocacy and campaign evaluation


Jim Coe and Rhonda Schlangen, experienced advocacy evaluators, discuss the evolving challenge of monitoring, evaluation, accountability and learning in the context of influencing work.


Over the last couple of decades, there has been a seismic shift in thinking about evaluating influencing work. As we argue here, we need to think critically about where we’ve got to and how to manage the tension between wanting the answers promised by rigorous planning and evaluation processes, and the inherent uncertainties around how social and political change really happens.

We first worked together nearly 20 years ago. At that time, thinking about how to track and assess campaigning and advocacy was pretty much uncharted territory, and there was a widely held notion that it was not possible to evaluate advocacy.

That consensus gave advocates massive latitude. Campaigns often had wildly ambitious goals, without a clear idea of how to get from A to Z. Experienced campaigners who knew what they were doing flourished, unshackled, but there was limited accountability, very little room for learning, and not much ability to identify good practice or to tell whether things were working or not.

These days, things are very different. There are strong pressures (mainly from senior managers and donors) on advocates to have defined plans, set out ‘SMART’ objectives, report against (mainly pre-set) outcomes and indicators, and show tangible results. (Oxfam recently conducted a meta-review of its influencing initiatives.)

We have elevated theories of change as a planning tool and invested heavily in formalized MEL systems. And there is great excitement any time a new tool or method emerges that promises to somehow override the inherent uncertainty of advocacy.

These shifts have brought some discipline to influencing strategies and plans and have been helpful in reining in the ‘anything goes’ culture.

It’s good to be rigorous and systematic, and to have a defendable basis for decisions made. It’s right that advocates are accountable for the resources they are responsible for. And more linear, bounded ways of thinking about advocacy and social change may work well in (or, at least, not be totally inimical to) some contexts – targeted interventions to achieve discrete policy change in sub-systems where there are only a limited group of actors, for example.

But much social change is multi-dimensional, about more than policy change. It’s about power relationships, social norms, behaviours and practices. It’s unpredictable, messy, inherently complex. And it’s both a truism and a heresy to say that in many such contexts some things are necessarily unknowable. So attempts to banish uncertainty – from plans and then from assessments of achievements and effectiveness – create an uneasy relationship with the realities of social change.

We have learned a lot collectively about what works in advocacy evaluation over the years, and this learning is taking place within broader conversations exploring complexity and systems thinking and models of adaptive management.

And yet we remain attracted by approaches to advocacy planning, monitoring and evaluation that are predicated on, and encourage, a false sense of predictability and control.

Life would be a lot easier for the advocacy community, and the advocacy evaluation community, if influencing work were tangible and measurable. We could plan with more confidence, exert control over the progress of events, show unambiguous results, know what works.

But wishing this was true doesn’t make it so.

So rather than trying to eliminate uncertainty, and the need for critical judgement, it would be better to factor these things back in and focus on how best to navigate these challenges rather than trying to evade them.

We are not saying go back to the ‘Wild West’ days of evaluation-free advocacy. We can build on positive developments, but we need to make sure they better reflect reality and are in line with the nature and needs of advocacy.

This aspiration for how things might evolve is summarised in the table below:

| | 20 years ago | Now | Where we should move to |
|---|---|---|---|
| Advocacy objectives | Vague and lofty aspirations | Short-term and measurable | Plausible ambition |
| Advocacy planning | Black box | (Often reductionist) causal logic | Influencing interventions as part of an eco-system |
| Tracking | No information | Lots of information | Tracking what’s important |
| Monitoring | Ad hoc | Highly formalized, overlaid on campaigns | With the rhythms of the campaign |
| Accountability | No accountability | Skewed upwards | Multiple accountabilities, to partners and communities as well as funders and managers |
| Underlying premise | “It’s impossible to evaluate advocacy” | A search for certainty | Accommodating uncertainty, valuing (space for) interpretation |

Essentially, these shifts boil down to resisting the lure of tools and methods that assume, or promise, certainty – and instead better accommodating the unpredictability of change and the uncertainties of measurement, and investing in approaches that better respond to these realities.

Author
Jim Coe

Jim Coe

Jim specialises in advocacy strategy and evaluation. Over 15 years as a freelance consultant, he has built up extensive experience of working with national, regional and international NGOs, foundations, and advocacy networks. Jim blogs and has written several books and articles about advocacy and evaluating advocacy. Prior to going freelance, Jim worked in the campaigns department at Oxfam GB, latterly as Strategy Development and Learning Manager.

Author
Rhonda Schlangen

Rhonda Schlangen

Rhonda is a U.S.-based independent evaluation consultant with over nineteen years’ experience in the fields of public policy and evaluation. She provides evaluation services to organizations ranging from civil society organizations and networks to funders. Her passion and priority is helping social change agents ensure that advocacy and service programs benefit from current innovations in the evaluation field.