Public health promoter Agnes Tubuhwa meeting residents in Agoloto village. The session was primarily intended as an evaluation tool, but it also gave villagers the opportunity to feed back concerns to Oxfam, and Oxfam staff the chance to reinforce hygiene and sanitation messages. Photo: Geoff Sayer/Oxfam

DAC criteria: The hand that rocks the cradle

Stephen Porter gives his thoughts on the OECD’s latest consultation on the revision of the Development Assistance Committee’s evaluation criteria (DAC criteria).

The Organisation for Economic Co-operation and Development (OECD) has launched a survey-based consultation on revising the Development Assistance Committee's (DAC) evaluation criteria. The DAC criteria are important because they inform how international development is undertaken, both within commitments on Aid Effectiveness, such as the Paris Declaration and the Accra Agenda for Action, and through programme evaluations (see, for example, the evaluation policies of the UK's Department for International Development (DFID), the South African government, the United Nations Evaluation Group (UNEG), and Oxfam). This means that a large number of aid assessment processes have been attuned to the DAC criteria in some way.

At a base level, what gets measured gets done. If the frame for measurement and assessment is incomplete, or comes with poor guidance, then important things will not get done. Development professionals should therefore contribute to the revision of these criteria, because the DAC criteria have power across the international development system.

The DAC criteria are like a hand that rocks the cradle of our development work, sometimes imperceptibly. I argue for incremental, evolutionary change to the criteria. Their value is that they are value neutral and can be applied by any structure, whether a Ministry of Finance in a one-party state or a rights-based organization. They provide an overarching language, while diverse values are worked through in individual evaluations. Change can be made by identifying one, possibly two, new criteria, revising the existing criteria, and reinforcing good practices in their implementation. In making values explicit in evaluation it is useful to have a common language, and that is what a revised set of DAC criteria would continue to provide.

When I convened a Master's degree course at the University of the Witwatersrand in South Africa, we ran an exercise in which the class generated its own evaluation criteria. Over the four years I facilitated this discussion, the responses aligned closely with the DAC criteria. To me, the overall criteria are in the zone. Recent adaptations from the humanitarian sector, such as the addition of the coherence criterion, usefully show how new criteria can be added.

In undertaking evaluations, I have had both good and frustrating experiences with the criteria. When managing and designing evaluations for DFID, the criteria partly framed the conversation with international non-governmental organizations (INGOs), in which gaps in implementation, organizational values and organizational concerns were explored. The criteria were useful as part of a reflective exploration to focus evaluation scopes within time and budget constraints.

If you really want to know about your 'impact' on gender across countries, you are going to need a large budget and a lot of information, and it will take a long time to get rigorous results. If you want to know how 'relevant' your work is to gender norms in a small locality, the scope is smaller, but you are likely to need particular contextual expertise.

As noted above, the current DAC criteria are, by design, value neutral. Whether values should be explicitly incorporated into the criteria is an interesting question. Because the criteria are value neutral, the values of donors – e.g. value for money, or the national interest – can load them so that organizational and local value perspectives are overwhelmed and evaluations become less useful.

Changing the criteria to incorporate different value propositions is not going to change the underlying distribution of power and money. We can, however, improve the extent to which commissioners of evaluations in government, bilateral and multilateral organizations value different voices and perspectives and seek their inclusion in evaluations.

The DAC criteria only have as much power as we choose to give them, and the space for negotiation is often larger than recognized. Organizations do, though, need the capacity to engage with and shape evaluations in a manner that is responsive to their organizational values. The change needed here is to the framing and guidance of the DAC criteria, so that organizations have the confidence to integrate and connect their evaluations to important values, such as human rights-based approaches.

I have also had challenges with the criteria. Their parameters are sometimes unclear or outright confusing, and I think the following issues need remedying:

  • Relevance has often been interpreted through an organizational strategy, which leads to a tautological discussion – it is relevant because our organization said it was relevant. Relevance, and its corollary of responsiveness to people, is rarely considered in depth.
  • Efficiency is significant because it focuses questions on implementation, yet I have rarely seen good efficiency measures. This is either because resources are difficult to track or because implementation is genuinely amorphous, so that meaningful comparisons are difficult to achieve. For these reasons I think it is important to revise efficiency and expand the criteria focused on implementation, for example by incorporating 'coherence'.
  • Effectiveness – This is the criterion I have found most straightforward to apply. For me the challenge here lies in understanding and applying practices that open the box of implementation, rather than in the criterion itself.
  • Impact – Before 2010 I was pretty clear on impact as a long-term measure of positive and negative change. Then I worked on randomized controlled trials (RCTs), where impact could be regarded as immediate, short term or long term. Evaluators should guard against the confusion that arises from these different definitions of impact. Too often in development we evaluate a project or programme and claim impact in a very narrow sense, rather than in the broader ecology beyond the project or programme parameters.
  • Sustainability is often treated as an assessment of whether an output is likely to be sustained after the end of a project. No one – well, hardly anyone – ever measures sustainability in terms of whether we are meeting the needs of the present without compromising the ability of future generations to meet their own needs.

None of the above issues is disabling; all can be worked through with a revised framework.

It is important for organizations like Oxfam and others to engage in the consultation on the DAC criteria. Evolution, rather than revolution, can respond to the issues highlighted here and help the criteria more appropriately nurture development approaches; revolution risks breaking things in unintended ways.

Author
Stephen Porter

Stephen Porter is currently the Director of the Learning, Effectiveness and Accountability Department at Oxfam America. He has a range of experience in development practice, including academic and donor roles and experience applying a rights-based approach to evaluation. Previously, Stephen was Evaluation Advisor for Market Development at the UK Department for International Development and Director of the Center for Learning on Evaluation and Results for Anglophone Africa at the University of the Witwatersrand. He holds an MPhil in Public Policy from the University of Cape Town and has published eleven journal articles and book chapters on the development of evaluation systems.