We’re not testing drugs but Oxfam should randomize more

Franziska Mager | Inequality, Research

Oxfam should expand its toolkit for understanding support for the problems we care about among the UK public, argues Franziska Mager.

You might think that when it comes to research, Oxfam’s ways of working have next to nothing in common with a pharmaceutical company’s. I argue that one thing our public-facing teams should adopt from these companies is the basic rationale behind how they run medical trials – that is, with randomized groups.

Randomization sounds complicated, but it’s not. Think of it as an intuitive mind game that helps you isolate the effect of the one thing you deliberately make different between two groups.

Logic dictates that if you randomly assign people drawn from a large pool to two or more groups (a computer usually does the trick), the types of people in each group will naturally end up resembling each other. Now, whatever you want to experiment with (like a specific drug), you give to one of your groups but not the other. Randomization means that one group of people experiences something that a comparable group – membership of which is assigned arbitrarily – does not. That’s it. It ensures the only systematic difference between the groups is whatever ‘treatment’ they have received. You can now study the impact of your drug, or of telling a group of people about inequality. For Oxfam this is clearly a powerful tool, both for programme work and for understanding our audiences in depth.
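
To make the logic concrete, here is a minimal sketch in Python – using made-up numbers and hypothetical column names, not real Oxfam data – of random assignment followed by a simple comparison of group averages:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical pool of 2,000 survey respondents with a pre-existing attitude score
people = pd.DataFrame({
    "respondent_id": range(2000),
    "support_score": rng.normal(loc=50, scale=10, size=2000),  # 0-100 scale
})

# Random assignment: roughly half to 'treatment', half to 'control'
people["group"] = rng.choice(["treatment", "control"], size=len(people))

# Suppose the treatment group is shown information about inequality, which
# (in this made-up example) shifts their stated support by a few points
people.loc[people["group"] == "treatment", "support_score"] += 3

# Because assignment was random, the difference in group means estimates
# the causal effect of the information
effect = (
    people.loc[people["group"] == "treatment", "support_score"].mean()
    - people.loc[people["group"] == "control", "support_score"].mean()
)
print(f"Estimated treatment effect: {effect:.2f} points")
```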

Our audience research usually involves regular surveys that capture general support and how well messages – general or specific to a big moment – are received, alongside qualitative methods to dig deeper and unpack people’s reasoning and values. These approaches are extremely important for getting a holistic sense of how the public feels about what we do. For example, in the past, focus groups have helped us understand why a supporter (or a sceptic!) isn’t fully convinced by a campaign message. Alongside these methods, randomized research designs can add value in a number of ways.

Firstly, they can reliably quantify what truly makes a difference – for example, between different pieces of communication material, information or slogans. Because randomized designs need large samples to detect significant differences, they usually also come with a high degree of confidence in the results. This makes them especially useful at important moments and for strategic direction-setting, and complements other efforts to build our evidence base.
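
As a rough illustration of why these samples need to be large, here is a minimal sketch – using Python’s statsmodels library and an assumed small standardized effect size of 0.2, which is not a figure from Oxfam’s research – of the kind of sample-size calculation that sits behind such designs:

```python
from statsmodels.stats.power import TTestIndPower

# How many respondents per group would we need to detect a small effect
# (Cohen's d = 0.2) with 80% power at the conventional 5% significance level?
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Respondents needed per group: {n_per_group:.0f}")  # roughly 400
```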

Secondly, by design, randomized research is particularly good at measuring the negative or unintended effects something can have. By rigorously quantifying those effects, randomization helps us challenge our assumptions and forces us to stay open-minded.

Finally, randomized designs produce high-quality evidence – so we can speak with authority about the claims we make concerning the public at large, and we increase our chances of being taken seriously and listened to.

The basic tenets of randomization have spread rapidly in the development sector. Often, projects trying to improve outcomes (think school enrolment) are set up as randomized programmes. One (random) group of young girls is enrolled in a programme, and another similar group isn’t. Then, the difference of interest – think vaccination rates – is measured and compared between the two groups and can be traced back to enrolment in the programme. Use of randomization in this way is hotly contested, on grounds of ethics and of how specific findings are to a particular context. Still, randomized research designs isolate exactly what we are interested in. Increasingly, when randomization isn’t possible from the outset, evaluations try to simulate it retrospectively – including Oxfam’s own evaluations.

In short, beyond programme work, randomization has lots of barely tapped potential to understand what moves our supporting audiences. In support of our latest inequality report, Reward Work, Not Wealth, Oxfam has started experimenting with this approach. In an online survey rolled out in 10 middle- and high-income countries, Oxfam asked representative samples of people about their existing perceptions of the scale of inequality, their concern about it, and their attitudes towards redistribution and government action. Crucially, in the process, we randomly administered select information about inequality to some people (for example, how much wealth the richest hold). This research is a great example of the value added by this method.

For the first time, we are now able to see the direct impact of telling someone about inequality – we can go back to this group and measure whether their answers differ from those who didn’t receive the information, and the results are fascinating. For example, we found evidence of a backfire effect in the US, whereby telling people about inequality frequently reduces their stated support for redistributive policies and their concern about inequality overall.
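
The comparison itself is straightforward. Here is a minimal sketch – again with simulated, made-up data rather than the actual survey results – of how stated support can be compared between respondents who did and did not see the information:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=7)
n = 1000

# Hypothetical survey data: which group each respondent was randomly assigned to
survey = pd.DataFrame({
    "saw_inequality_info": rng.choice([True, False], size=n),
})

# Simulate a small 'backfire': informed respondents are slightly less supportive
base_rate = 0.60
survey["supports_redistribution"] = rng.random(n) < np.where(
    survey["saw_inequality_info"], base_rate - 0.05, base_rate
)

treated = survey.loc[survey["saw_inequality_info"], "supports_redistribution"].astype(int)
control = survey.loc[~survey["saw_inequality_info"], "supports_redistribution"].astype(int)

# Difference in support between the two randomly assigned groups,
# with a simple two-sample t-test for statistical significance
diff = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"Difference in support: {diff:+.1%} (p = {p_value:.3f})")
```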

Download the academic paper, ‘Can information about inequality and social mobility change preferences for redistribution?’, to find out more.

Author

Franziska Mager