Adinda Van Hemelrijck reflects on how we adapted PIALA for an impact evaluation in Myanmar. In 2015, Oxfam commissioned an impact evaluation of its resilient livelihoods project in the Dry Zone in Myanmar (part of the Effectiveness Review series). This project worked to develop a village-level mechanism – called a Membership Organisation (MO) – that could facilitate broad citizen participation in local …
Measuring time: Comparing questionnaire designs
Simone Lombardini compares duration, estimates and enumerator bias from two different time-use survey modules from the same impact evaluation survey in Indonesia. Unpaid care work and ‘Time Poverty‘ are increasingly recognised as relevant to development efforts, and interest in measuring time-use data is growing. However, gathering information on time use is not easy; time-use modules are known for being …
To cluster or not to cluster? Not to cluster…
Getting standard errors right is important for anyone trying to do quantitative impact evaluation. We want to know that any impacts of the project that we observe are real, rather than just the result of random variation in the data. In this blog post, Jonathan Lain focuses on one particular aspect of calculating standard errors that has proved a real …
Building trust through accountability
Oxfam’s Humanitarian Informal Feedback project, funded by the Humanitarian Innovation Fund (HIF), has recently come to a close. Here Project Manager Emily Tomkys shares the evaluation report and delves deeper into one specific finding: the link between trust and accountability. Receiving feedback from communities that organisations like Oxfam work with is an essential pillar of ensuring our work is responsive …
Geek out with us @ MERL Tech London
MERL Tech – an event exploring the use of technology for monitoring, evaluation, research and learning – is coming to London for the first time on 20–21 February 2017. Hosted at the St Brides Foundation by Oxfam and Comic Relief, we are calling for practitioners to submit session ideas and to sign up to attend and become part of our …
Thinking outside the baseline
Dustin Barter takes us through Oxfam’s baseline management for the Durable Peace Programme in Myanmar and lays out how, this time, our approach departed from the status quo. Allocate some funding, get a consultant to collect data and write a report, and there you have it, baseline done. Share it with the donor and if they’re happy, contract …
Wanted! MEL specialist on fragile and conflict affected contexts
Marta Arranz reflects on the role of monitoring and evaluation in fragility and conflict programming and talks about an exciting new role in Oxfam GB. Monitoring, evaluation and learning is a vital part of Oxfam’s work in fragile and conflict-affected contexts. We’re looking for a creative, experienced technical specialist to push our thinking and practice on Monitoring, Evaluation and Learning …
Evaluation for strategic learning and adaptive management in practice
Kimberly Bowman summarises some of the discussion and insight from a session on evaluation for adaptive management at the recent European Evaluation Society conference. ‘Adaptive programming’ (a.k.a. adaptive management, adaptive aid) is a hot topic, explored in a number of insightful reports, blog posts, learning initiatives and even manifestos. Many of us sitting in internal monitoring, evaluation and learning (MEL) …
Measuring indirect beneficiaries: Attempting to square the circle?
Marta Arranz introduces the challenging topic of measuring indirect beneficiaries as part of Oxfam’s efforts to better measure influencing work. As someone who works on M&E of influencing, I am interested in how programmes and campaigns can estimate who and how many people they actually benefit, particularly those who benefit indirectly, without being directly engaged with project activities. Don’t miss …
The politics of inequality: who is measuring what and why?
Our latest Real Geek instalment explores different measurements of inequality and how our understanding of the data they produce is crucial to the issue as a whole. There is no one ‘right’ way to measure anything. That’s what measurement is: one way to quantify out of many. There can of course be a wrong way, and plenty of statisticians work …