Sarah is the chairperson of a handicrafts group in Bidibidi Settlement, Uganda. Photo: Kieran Doherty

What does it mean to be a responsible evaluator? Five key reflections from the 13th EES Biennial Conference



In this blog, Marta Arranz, Andrea Azevedo and Alexia Pretari share their reflections from the 13th European Evaluation Society (EES) Biennial Conference, commenting on some of the emerging debates, and inviting other evaluators to join in and share their views.


It was our first EES conference and we were approaching it with curiosity and excitement. Three intense days ahead. A promising programme, the anticipation of seeing old colleagues and friends, and, if we are honest, some nervousness for our own presentations (learn more about what each of the Oxfam participants talked about here).

In turbulent and critical times, to use the language of the conference, choosing neutrality is in itself a choice. Especially when we see evaluation as an instrument for equity and social justice. But is the evaluation community in agreement on this? What does it mean for us personally, and as practitioners, if we are evaluators who care? And what does this look like in practice?

At Oxfam, we have been discussing for some time what it means to ‘up our game’ in evaluation. These are some of the strong ideas that were shared at the conference, and that resonated with our own personal and professional soul-searching.

1. Evaluation funders and commissioners, as well as evaluators, can influence the political economy of evaluation.

This relates both to the evaluation process (who decides what gets evaluated, who pays for it, who is involved, who is leading, whose voices matter, whose values matter) and to the asymmetries of power in the thinking and standards of the evaluation field. Adeline Sibanda, President of AfrEA; Silvia Salinas, coordinator of ReLAC; Sonal Zaveri, Community of Evaluators South Asia; Zenda Ofir, IDEAS; and Nancy MacPherson, former Managing Director for Evaluation at the Rockefeller Foundation, formed a brilliant panel, leading a stimulating discussion and call for action.

Panellists and the audience made great suggestions on what could be done differently: undertaking power analysis in evaluation design and processes, rethinking the composition of evaluation teams so that Southern evaluators have an equal role in producing knowledge, and extending evaluation funding timelines. The audience was also asked, ‘Do you know any post-colonial thinkers?’ Participants in the session could only name a few, which turned into an invitation to be aware of the gaps in our ways of looking at the world.

2. Evaluation is a political act.

We have started to shift from a dominant paradigm in which evaluation serves technocracy and good management to one in which evaluation is not only political, but can become a political act in itself. Is the evaluation community up to the challenge? As Silvia Salinas puts it, the dominant paradigm of evaluation is translated into what a ‘good’ evaluator looks like: the sector tends to value evaluators who are good technicians, and recruits mainly for technical skills. Rethinking evaluation paradigms should translate into nurturing a different profile of evaluation competencies.

Silvia Salinas suggested a profile that balances technical skills with the capacity to make ethical choices and to understand the political context of evaluation. Evaluations without self-reflection risk reproducing social inequalities. Our choices have political and ethical implications, and it is becoming increasingly difficult to say they don’t.

3. It’s not only about gender – although it is! Let’s bring intersectionality and power imbalances into the conversation.

Thinking politically and ethically in evaluation requires evaluators and commissioners to be more mindful of the various power imbalances in society that we reproduce in our practice, or keep invisible. Gender is one of many elements that shape our position in the world, and several sessions brought in intersectional frameworks.

Panellists highlighted how the intersection of different power dimensions shapes positions and experiences in a unique way. An ethical approach not only acknowledges these power imbalances in gender, race and class dynamics, but also creates the conditions to change them. We need to be able to translate this into our choices as evaluators when faced with the recurrent dilemmas of our profession. What is the scope of the evaluation? Who is in and who is out? How do we deal with the trade-offs between rigour, inclusivity and feasibility? We cannot address these questions as purely technical issues; they are also ethical ones.

4. Complexity is becoming much more mainstream in evaluation.

We are still in the early days of finding satisfactory solutions on costs, design, timeframes, methods and approaches. Several voices called for greater transdisciplinary collaboration and suggested that the evaluation community could learn a lot from other fields. The complexity lens brings in the need to think about evaluation from a new perspective, and to consider it in the context of a broader programme or policy trajectory.

Evaluations are usually a specific moment in a vast process, rather than a continuous dialogue with learning and moving boundaries. Complexity approaches can help us make this dialogue happen more often, and be more visible. On the topic of complexity and ethics, a sense of urgency came from Thomas Schwandt’s fantastic keynote lecture on ethical accountability in post-normal evaluation: “Failure to acknowledge uncertainty and complexity is not simply a technical error, but also an ethical one”. By the way, he is also a fan of the You Are Not So Smart podcast that explores self-delusion (a must for all responsible evaluators!).

5. Being a caring evaluator matters.

Thomas Schwandt’s keynote connected very well with the conversation around ethics, values, care and evaluation, addressed in a heart-warming session with the legendary Jennifer Greene, Helen Simons and others. Speakers and participants explored what it means to be a caring evaluator, why this matters, and the technical and soft skills needed to do it well. Core business, or another step in the ever-changing role of the evaluator?

Girl Effect shared their expertise on how to embed privacy, security and safeguarding principles throughout the project cycle, as well as in monitoring and evaluation in the digital world. In ICT-supported evaluations and programmes, these key points are strongly connected to issues of ethics, values and care. Some argued that if we are serious about these issues, we should integrate them better into our sector standards and quality assurance frameworks. This would make them more visible and build in the necessary incentives.

It’s not enough to measure, we need to understand

To conclude, a refreshing reminder from the wonderful Zenda Ofir, calling for the need to move away from ‘snap-shot evaluation’ to ‘trajectory evaluation’. In her own words, “It’s not enough to measure, we need to understand”. We take this to mean not only understanding the trajectory of change, but also its unique context; the pre-existing conditions that can explain why change happened the way it did; and how the system adapted to accommodate it.

These are just some of the things we took away from the conference (yes, we know, we are probably highly biased). And now, over to you! Are you also grappling with these questions? What does being a responsible evaluator mean to you?

Author
Marta Arranz

Marta is Senior Advisor, Planning, Monitoring, Evaluation and Learning (PMEL) for Influencing at Oxfam GB. With a specific focus on influencing in programmes, she works with others to shape Oxfam's thinking on measuring influencing work across different thematic areas and types of programming.

Author
Andrea Azevedo

Andrea is a Monitoring, Evaluation and Learning (MEAL) Officer at Oxfam GB. She specialises in the design and implementation of MEAL strategies and frameworks in Women's Economic Empowerment. Informed by feminist and human-rights based approaches, she also advises on how country and global teams can work to build a culture of evaluation where information collected through MEAL systems can be used for adaptive programming, fulfil learning objectives and promote social accountability.

Author
Alexia Pretari

Alexia leads the measuring resilience work for Oxfam Great Britain, developing new tools and methods for assessing resilience capacities; she works primarily on impact evaluation design and implementation. Alexia is passionate about finding ways for impact evaluations to better reflect how power, and its different dimensions and intersections, play out and affect communities, households and individuals (including taking into account intra-household dynamics).