Evaluation for strategic learning and adaptive management in practice

Kimberly Bowman | Methodology, Real Geek

Kimberly Bowman summarises some of the discussion and insight from a session on evaluation for adaptive management at the recent European Evaluation Society conference.

‘Adaptive programming’ (a.k.a. adaptive management, adaptive aid) is a hot topic, explored in a number of insightful reports, blog posts, learning initiatives and even manifestos. Many of us sitting in internal monitoring, evaluation and learning (MEL) teams are watching the debate closely, seeing clear parallels between the latest “fuzzword” and our ongoing efforts to embed evidence-based learning and insight into different ‘levels’ of our projects and organisation.

Developmental evaluation and strategic learning approaches suggest that systematic inquiry and reflective practice can combine to help teams and organisations learn quickly – almost in ‘real time.’ Evaluators have grappled with questions of use and utility for decades – but the formal linking of evaluation, strategy and management over short time periods is a newer demand.

We took the opportunity at the recent EES conference to convene a roundtable on the topic, asking participants to draw on their own experience of this type of evaluation in practice. Participants included representatives of European development agencies and foreign ministries, international organisations, academia and a few experienced consulting firms. While we didn’t reach any hard-and-fast conclusions – that wasn’t the point of the session – a number of useful questions and insights emerged:

  • The end of ‘chunky’ and the rise of ‘bite-sized’ evaluation? Large-scale evaluations – for example, of organisational strategies – require a lot of time and energy. MEL for adaptive management requires evaluative activity that is ‘smaller’ – or at least flexible and adaptable enough to respond to changes in context or organisational need. This might mean mid-term (formative) evaluations – or even smaller, ‘bite-sized’ evaluative activities, linked together.
  • Evaluation to influence policy: Large, comprehensive, well-designed ‘chunky’ evaluations (as some participants described them) are more likely to be seen as credible, and have the potential to influence policy. Evaluation for adaptive management holds the promise of influencing practice – but may be less well suited to broader influence.
  • The people side matters: Many participants discussed the barriers to evaluation uptake in their organisations and projects:
  1. In some places, the volume of current demands for information – reports to management, to donors – has left many people with ‘review fatigue.’
  2. People are trained (and rewarded for working) in rigid project-management styles. In some cases, training might be needed to build analysis skills; at other times, it may simply be a case of giving people the power to manage learning and decision-making at their own level.
  3. Leaders and culture matter. In some large organisations (including government ministries), new ways of working involve high levels of personal risk. When learning and adapting (doing things differently) is understood as a bad thing – as “subverting the existing way of doing things” – leaders need to take responsibility for changing this. Evaluators can only do so much.
  • Lots of tensions to manage: Using evaluation in this way raises a number of tensions – including:
  1. Privacy, transparency and responsible data: As we gather, analyse and use data with increasing frequency – and ideally, allow for its use among more actors – we raise new challenges in ensuring that data is properly collected, stored, managed and disposed of. How do we ensure that the right data is easily accessed and used by a range of stakeholders – while protecting the privacy of, for example, people who are identified or simply reflected in the data?
  2. Agility and control: Practising adaptive management often means decentralising – pushing ‘sense-making’ and decision-making down to the lowest appropriate level. This may mean trade-offs for those higher up the organisational food chain – data that is useful to field teams may not be the data HQ is looking for. Evaluators can expect to navigate these tensions, particularly when trying something new.
  3. Humans and machines: New technologies and new data products bring with them lots of potential insights – but machines can only do so much. Humans need to be involved in understanding how, when and what information might be useful to their particular team, location and work. How can we design evidence-informed management systems that make the most of both ICTs and human beings?

Thanks to session participants for their views. If you’d like a closer look at the discussion, see the photos of our summary flip-charts above. I think we only scratched the surface of the topic, so I’d love to continue the conversation (or get some links to other resources or perspectives) in the comments.
