Technical learning from the meta-analysis of women’s empowerment projects

Kristen McCollum | Gender, Methodology, Real Geek

Kristen McCollum, consultant at the International Fund for Agricultural Development (IFAD), shares the learning from Oxfam's meta-analysis of women's empowerment projects.

When we first decided to conduct a meta-analysis of the women's empowerment Effectiveness Reviews (ERs), the idea was to go where no impact evaluation had gone before. While the Effectiveness Reviews give us a rigorous measurement of the impact of specific projects in their contexts, we wanted to draw conclusions across studies from an organisational perspective, and to learn about measurement and impact from a bird's-eye view.

The results were enlightening in some unexpected ways, both in their implications for programme learning and in their technical implications for measurement. We already presented the main results and programme learning in a previous blog. Here we outline four major technical lessons we took from the meta-analysis.

1) There’s power in numbers

Sample sizes for impact evaluations are always calculated with statistical power in mind. That is, we make sure that the number of people interviewed is sufficient to detect a statistically significant impact at a level we think is substantial. Two caveats here: first, we all know that budget is often an equally powerful determinant of sample size; second, as women's empowerment is notoriously difficult to measure, it is difficult to define when an impact is 'large enough' to be deemed important to detect.
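As an illustration, a basic power calculation can be run in a few lines of Python with statsmodels. This is a minimal sketch, assuming a simple two-group comparison; the standardised effect size of 0.2 is a hypothetical placeholder, not a figure from the Effectiveness Reviews.

```python
# Minimal power-analysis sketch for a two-group comparison.
# The effect size is a hypothetical placeholder, not an ER figure.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a standardised effect of 0.2
# with 80% power at the 5% significance level (two-sided test).
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")
```

With these inputs the answer is roughly 394 respondents per group; and because the required sample size scales with the inverse square of the effect size, halving the detectable effect roughly quadruples it, which is why small impacts are so expensive to detect.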

The meta-analysis gives us some effect sizes we can use for conducting power analysis and drawing conclusions about the required sample size for future studies. Moreover, the beauty of meta-analysis is that it acts as value-added research, taking advantage of the latent power hidden in a collection of impact evaluations. Though one study might not have a sufficient sample size to detect an impact, a combination of studies might. This is exactly what happened in our analysis.
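To see where that latent power comes from, here is a minimal fixed-effect (inverse-variance) pooling sketch; the effect estimates and standard errors are invented for illustration, and the actual meta-analysis may well have used a different estimator, such as random effects. Each invented study is individually non-significant, yet the pooled estimate is not.

```python
import numpy as np

# Invented effect estimates and standard errors for nine studies;
# none is individually significant at the 5% level (|effect/se| < 1.96).
effects = np.array([0.05, -0.02, 0.08, 0.03, 0.06, 0.01, 0.04, 0.07, 0.02])
ses     = np.array([0.06,  0.07, 0.05, 0.08, 0.06, 0.09, 0.07, 0.05, 0.08])

# Fixed-effect meta-analysis: weight each study by its precision.
weights   = 1.0 / ses**2
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f}, z = {pooled / pooled_se:.2f})")
```

Run on these made-up numbers, the pooled z-statistic comes out above 1.96: the combined studies detect an effect that no single study could.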

Of the nine ERs that examined impact on violence against women, not a single one detected a significant change in women's experience of violence. However, meta-analysing the studies allowed us to find an impact that was too small to be detected by any individual study. Overall, women in the projects were more likely to report having experienced violence than women in comparison groups. Exploring the data further, we think this is likely, at least in part, a result of the way violence was measured, which leads us to our next technical lesson:

2) Stronger measurement tools

Seven of the ERs measuring experience of violence employed the 'neighbourhood method'. With this method, women are asked whether they or anyone close to them have experienced physical, sexual, or emotional violence. While this allows us to ask about violence in a more sensitive manner, suited to particularly vulnerable contexts, it may give results that are difficult to interpret. Are women reporting more violence because it is happening more often, or are more empowered women more likely to identify violence (particularly emotional violence) happening to their peers? This question is difficult to answer with the data at hand. What we learned is the need to strengthen our measurement of violence, so that programmes can more accurately detect, prevent and manage any possible unintended negative impact on violence. As a direct response to these findings, in the most recent Effectiveness Reviews of women's empowerment projects we moved to investigate direct exposure to violence. We also need to pay more attention to how we capture this important information.

3) Flexibility does not prejudice results

In addition to the above, other measurement questions emerged while conducting the analysis. A meta-analysis can give us overall effect sizes, but it also allows us to regress the effect sizes of our ERs on a variety of study characteristics to see whether the way we construct the Women's Empowerment Index drastically influences its output. Since the Index is meant to be a flexible tool that allows for a context-specific measurement of empowerment while still being reasonably comparable across projects, we wanted to verify that small changes to the Index wouldn't determine the impact it detected. For example: is the Index more likely to produce a positive measure of empowerment if it includes more indicators? Here, we don't find any statistically significant difference. So what does this mean? It means that, so far, there is nothing to suggest that differences in the construction of the Index between studies are influencing the results we get. This is good news: the flexibility of the Index doesn't appear to be compromising our results.
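A meta-regression of this kind can be sketched with weighted least squares. Everything below is illustrative: the effect sizes, standard errors, and indicator counts are made up, and the variable names are ours, not Oxfam's.

```python
import numpy as np
import statsmodels.api as sm

# Made-up study-level data: effect size, its standard error, and the
# number of indicators each study's Index contained.
effects      = np.array([0.10, 0.04, 0.12, 0.06, 0.09, 0.03, 0.08])
ses          = np.array([0.05, 0.06, 0.04, 0.07, 0.05, 0.08, 0.06])
n_indicators = np.array([12, 15, 10, 18, 14, 16, 11])

# Regress effect sizes on the moderator, weighting studies by precision.
X = sm.add_constant(n_indicators)
fit = sm.WLS(effects, X, weights=1.0 / ses**2).fit()

# A slope indistinguishable from zero suggests the number of indicators
# is not driving the measured impact.
print(f"Slope on n_indicators: {fit.params[1]:.4f} (p = {fit.pvalues[1]:.2f})")
```

The logic mirrors the check described above: if the slope on a construction choice (here, the indicator count) were significant, it would be a warning that the Index's flexibility was shaping the results.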

4) Let's measure impacts, not outputs

Finally, one finding called our attention to the underlying assumptions of how we conceptualise empowerment. We realised that 13 of the ERs had used a group-participation measure in their Index. This is fairly standard, and in line with how OPHI and IFPRI measure empowerment. However, it may not be as appropriate in the context of our Effectiveness Reviews, or in impact evaluations more generally, because the vast majority of the evaluated projects have a group-forming component. We would therefore expect this indicator to always show a positive effect for the treatment group, as a de facto condition of being a project participant (indeed, the overall effect size was large and significant). So do we leave it out? It probably depends. Participation in groups is a key facet of empowerment, but it is important to capture the impact of that participation without simply throwing an output measurement into an impact-focused Index. In future Effectiveness Reviews, we will give priority to indicators such as 'influence within groups' rather than 'group participation', and we will only include group participation where it is not a pre-condition for project involvement.
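To see the mechanics, here is a minimal simulation, with entirely made-up data, of how a composite index (computed here as the share of indicators met) inflates the treatment/comparison gap when one indicator is a pre-condition of participation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical respondents per group

def index_means(group_member_rate):
    # Three illustrative 'impact' indicators, drawn identically for
    # both groups...
    impact = rng.binomial(1, 0.5, size=(n, 3))
    # ...plus one output indicator: group participation.
    participation = rng.binomial(1, group_member_rate, size=(n, 1))
    full = np.hstack([impact, participation])
    return impact.mean(), full.mean()

# Treatment: membership is near-universal by project design.
impact_t, full_t = index_means(0.95)
# Comparison: an assumed background rate of group membership.
impact_c, full_c = index_means(0.30)

print(f"Index gap with group participation: {full_t - full_c:.3f}")
print(f"Index gap, impact indicators only:  {impact_t - impact_c:.3f}")
```

Because membership is near-universal in the treatment group by construction, the index that includes it registers a sizeable 'impact' even though the genuine empowerment indicators were drawn identically for both groups.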

To conclude, the meta-analysis was useful not only for measuring organisational impact, but also for shaping how Oxfam measures women's empowerment in future and for questioning the assumptions we make about the underlying mechanisms of projects.

Have you seen examples of how meta-analysis can inform measurement at an organisational level? Do you have insights into measuring gender-based violence, women's empowerment, or the impact of group participation? Tell us about them by leaving a comment below.

Download the Impact evaluation of the project ‘AMAL: Supporting Women’s Transformative Leadership’ in Tunisia