Who is asking whom? Does it matter?

Real Geek

In this blog we look at data from the DRC, Zambia and the Occupied Palestinian Territory to see how interviewer and interviewee characteristics, especially gender, affect household-level information. Gender is one important factor shaping inequalities of power at play across scales, in private and public spheres and across contexts. In carrying out quantitative impact evaluations at Oxfam, we have been working to shed light on the power dynamics that underpin structural social inequalities.  

We are working to put intersectionality at the fore of our evaluations. Race, ethnicity, disability status and more shape power and privileges, and intersect with gender. We have been moving away from household-level analysis that invisibilizes these inequalities, and giving more attention to individual characteristics and intrahousehold dynamics. There is growing recognition that we need to ‘open up the household’, following the work of feminist economists. 

Do interviewee and interviewer gender and other characteristics affect the data?  

First, does the gender of the interviewee alone affect household-level data? 
Based on data gathered in Mexico, Adan Silverio-Murillo and Edmundo Molina found that a significant share of men and women in the same household – specifically, husband and wife – report household-level information, such as asset ownership, differently.  

Second, how might the social interaction between interviewee and interviewer, particularly in relation to gender, affect household-level survey data?  
There are many discussions about the ‘interviewer effect’ (often referred to as the ‘enumerator effect’, see this blog for example). Using data gathered in Uganda, Michele di Maio and Nathan Fiala highlight that this effect is large for questions asked at the individual level. For example, interviewee responses on political preference can vary based on interviewer characteristics like gender, years of interviewing experience and whether they live in an urban area. They also found that interactions between interviewer and interviewee gender and education play a role.  

How have we (tried to) answer these questions? 

To understand how women and men may experience aspects of their lives and benefit from interventions differently, we have been alternating between talking with men and women in each sampled household using random variation (instead of surveying the self-identified ‘household head’ or the person most knowledgeable on a topic, irrespective of gender). 

In 2018 and 2019, we conducted three Effectiveness Reviews – in the Democratic Republic of Congo (rural North Kivu), Zambia (peri-urban Lusaka) and the Occupied Palestinian Territory (area C of the West Bank) – where we introduced this random variation in interviewee gender while having a mixed team of interviewers where the setting allowed (in the West Bank, interviewer gender varies only for men interviewees). This random variation allows us to: 

  1. isolate the effect of interviewee gender on household-level information, and  
  2. look at how the interaction of interviewee and interviewer gender affects household-level measurements.  

Around 18% of households had either no adult man or no adult woman (e.g., single parent with children). We excluded these households from the analysis presented below since the random assignment could not be followed.  
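As a rough sketch of this assignment step (the variable names here are ours, purely illustrative, not the actual survey instrument's), it could look something like this in Stata:

```stata
* Illustrative sketch: randomly assign whether a woman or a man is
* interviewed in each sampled household, then set aside households
* where the assignment cannot be followed (no adult man or no adult
* woman present). Variable names are hypothetical.
set seed 20190101
gen byte woman_interviewee = runiform() < 0.5
gen byte assignment_feasible = has_adult_man & has_adult_woman
drop if !assignment_feasible   // around 18% of households in our data
```

Fixing the seed keeps the assignment reproducible, which matters when fieldwork plans are checked against the analysis later.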

We combined the data from the three Effectiveness Reviews and looked at common household-level data gathered in all three places: number of household members and household income sources (simple measurements, which we would not expect to vary) and a wealth index (a more complex measurement, which could be more prone to variation). This wealth index is a composite indicator, based on housing conditions, infrastructure access and household ownership of various assets, which are mostly reported by the interviewee. In each country, the index was adapted to be locally relevant and standardized to allow comparison of z-scores across contexts.  
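As a hedged sketch of the standardization step (the indicator names are hypothetical, and the actual index construction is richer than a simple sum), in Stata:

```stata
* Illustrative sketch: build a raw wealth score from locally relevant
* indicators (assets, housing conditions, infrastructure access), then
* standardise it so z-scores are comparable across contexts.
* Indicator names are hypothetical.
gen wealth_raw = owns_radio + owns_phone + improved_roof + piped_water
egen wealth_z  = std(wealth_raw)   // mean 0, standard deviation 1
```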

The final combined dataset represents 2,172 people. We ran several regressions. 
The household-level measurements were evaluated as four separate dependent variables: 

  1. number of household members 
  2. household has income from salaried employment 
  3. household has income from remittances, and 
  4. standardized household wealth index z-score  

For each of these, we considered nine different combinations of independent variables. We reviewed models with interviewee characteristics only – gender, age and primary school completion. We also compared these results to models with interviewer gender and/or an interviewee-interviewer gender interaction term, both with and without country fixed effects and clustering at the community level.  
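As a sketch of the fullest of these specifications (the variable names are ours and illustrative, not the exact code shown in Figure 1), the Stata command could look like:

```stata
* Sketch of the fullest specification: interviewee gender, age and
* primary-school completion, interviewer gender, the interviewee-by-
* interviewer gender interaction, country fixed effects, and standard
* errors clustered at the community level. Variable names are ours.
regress wealth_z i.woman_interviewee##i.woman_interviewer ///
    interviewee_age i.completed_primary i.country, ///
    vce(cluster community)
```

The `##` factor-variable operator includes both main effects and the interaction term, so the simpler models are nested within this one.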

Figure 1: Visual representation of regression analysis (top) with example Stata code (bottom) 

So, does the interviewee gender affect household-level measurements? 

Based on this analysis, not really, no. There is no impact of interviewee gender on household-level measurements, at least not for the three relatively simple indicators we looked at (i.e., household size and income sources). The situation is slightly more complex for the wealth index, as we do see an impact for interviewee gender alone – wealth reported by women is slightly lower, on average – but this difference disappears with additional control variables and fixed effects. 

And how about the interaction between interviewee and interviewer gender? 

In part, yes. While we saw no effect from the interaction between interviewee and interviewer gender on the simple indicators, like household size (see image below, both lines slope upwards), this interaction does seem to affect the wealth index (see image below, the lines slope in opposite directions). 

Gender pairing resulted in similar scores in both cases (woman interviewee – woman interviewer, man interviewee – man interviewer). Meanwhile, scores are slightly higher but also similar when genders are not paired (woman interviewee – man interviewer, man interviewee – woman interviewer).  
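Plots like those in Figure 2 can be produced from predictive margins; as a sketch (again with hypothetical variable names), after fitting a model that includes the interviewee-by-interviewer gender interaction as factor variables:

```stata
* Sketch: compute predicted outcomes for each gender pairing
* (woman-woman, woman-man, man-woman, man-man) and plot them.
margins woman_interviewee#woman_interviewer
marginsplot
```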

Figure 2: Predictive margins plots showing how the interaction between interviewer and interviewee gender affects data gathered on the number of household members (left) and wealth (right) 

What do we do with this information?  

Overall, this analysis helps us qualify the limitations of household-level analyses, and of sub-group analyses that use household-level information, for example when accounting for baseline differences, as is the case in most quantitative Effectiveness Reviews. To be clear, we are not saying anything about accuracy. Rather, we are looking at how lived experiences, which are shaped by structural inequalities, may influence the information gathered. 

The results suggest that pairing interviewee and interviewer gender is one way to reduce variation in household-level measurements. We have already been following this approach when needed to create a safe(r) space, particularly when people may share sensitive stories and gendered experiences.  

But this analysis gives another reason to adopt this approach, which we have used depending on gender norms in each context. This understanding can contribute to refinements in data gathering protocols that adopt a stronger feminist perspective, including how we identify who to interview and who is interviewed by whom. It can also inform interviewer recruitment and training choices. Clearly, this analysis is currently limited to a binary understanding of gender – what would it mean for our data gathering protocols to safely move beyond binary in our impact evaluations? We are working on this and hope to share more, and would also love to hear your ideas. 

More broadly, these findings highlight the need to pay attention to identities and dimensions of social inequalities, and how they play out in different contexts to shape social interactions, as we carry out research and evaluation work. While these results suggest the biases are small, they cannot be ignored. Data gathering is indeed the product of a social interaction. We must continue to better understand and consider how gender intersects with class, race and other dimensions to influence what we hear from whom, depending on who is asking. 

For example, Priya Mukherjee shows that priming caste (making this aspect of one’s identity salient before the interview) affects responses related to long-term aspirations, beliefs and education outcomes among teenagers and their parents in rural Rajasthan, India. If the interaction with an interviewer from a different background acts as a prime for specific aspects of one’s own identity, various aspects of the identity of both interviewers and interviewees could affect the data gathered, even if they have the same gender identity. Considering the nuance and complexity of identities and adopting intersectional lenses are critical moving forward.  

So yes, who asks and who answers matter. But maybe not so much in the way we thought it would for the generation of household-level information. 

A final reflection 

You may have noticed that we are actively trying to move away from the language of ‘enumerator’ and ‘respondent’. These terms feel like an impersonal reduction of what the roles actually entail and imply an interaction where the ‘enumerator’ has power over the ‘respondent’. Interviewer and interviewee seem a bit better, but we would be happy to hear alternative suggestions. 

Having spent a lot of time with teams of interviewers and gathering data ourselves, we have experienced what it takes to conduct interviews. The critical role that interviewers and interviewees play in research and evaluation is hugely underacknowledged and undervalued. We highly recommend this blog series, which is challenging dominant research practices in so many ways.


Cécile Pomarede

Cécile joined Oxfam’s Strategic Learning and Impact Evaluation Team in summer 2019 to take part in the research project Data generation as social interaction: who is asking whom. She is a student in Politics, International Relations and Quantitative Methods, soon graduating from the University of Warwick.

Alexia Pretari

Alexia Pretari is co-leading Oxfam GB’s impact evaluation work. Alexia is passionate about finding ways for research and evaluation to better reflect how power, social inequalities and intersecting identities play out in different contexts as well as in the research and evaluation process itself.

Jaynie Vonk

Jaynie conducts Effectiveness Reviews and other impact evaluations with Oxfam. She promotes evidence-based learning, accountability, and understanding of programme results, leading on the development of sustainable water and sanitation measurement tools.