The good and the ‘not so good’ of our experiences with SenseMaker

Franziska Mager | Methodology, Real Geek

When we purchased a license for SenseMaker, a proprietary data collection and analysis tool, in early 2017, the excitement in our more “geeky” teams was palpable.

SenseMaker is a research method inspired by the Cynefin complexity framework, which is known for helping make sense of, and categorize, messy or ambiguous situations that require some sort of decision-making. In a nutshell, it’s a narrative-based method of research.

Like other narrative approaches, it combines aspects of quantitative and qualitative methods. Any application starts with a personal experience elicited from an interviewee in response to a short, pre-defined question around a topic of interest. For example: “Describe a recent experience or situation which made you feel motivated or discouraged about your professional future.”

This question remains the same for all respondents in any given sample. Each narrative shared by a respondent becomes the foundation for answering a set of pre-defined follow-up questions. Some of these resemble a typical household survey roster, like Multiple Choice Questions (MCQs) capturing demographic data or assets. Other question types are specific to SenseMaker because they rely on a visual digest, such as triads, where the respondent positions a marker within a triangle whose three corners each represent a different factor. Triads encourage the respondent to think about the presence of several things that are not mutually exclusive, i.e. that all exist, to different extents, at the same time.
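
To make the triad idea concrete, here is a minimal sketch in Python. This is my own illustration, not SenseMaker’s actual data format (which this post doesn’t describe): it simply treats a triad answer as three weights that sum to one.

```python
# Illustrative only: a triad response modelled as barycentric coordinates,
# i.e. three non-exclusive factors that all exist to different extents.
def triad_response(a: float, b: float, c: float) -> tuple[float, float, float]:
    """Normalise three raw weights so they sum to 1."""
    total = a + b + c
    if total <= 0:
        raise ValueError("weights must have a positive sum")
    return (a / total, b / total, c / total)

# A respondent leaning mostly towards the first corner of the triangle:
print(triad_response(7, 2, 1))  # (0.7, 0.2, 0.1)
```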

Well-designed follow-up questions, when collected from a large respondent sample, can reveal additional layers of meaning connected to the experience shared in the narrative itself (such as values, ideas or beliefs). Check out this publication, which illustrates how Oxfam has used the method to better understand the issue of truly decent working conditions.

How has it gone for us?

Having been involved in all our case studies to some degree, I will try to show where I think the method has added value. At the same time, a more standard research tool could sometimes have delivered the necessary insights with less effort and time. This counterfactual is important: Oxfam is a fast-paced, curious organization, but also one that is always short on resources.

It is difficult to discern commonalities between highly exploratory applications, which try to shed light on issues like decision-making during displacement, and those at the other end of the spectrum with a clear evaluation mandate.

But here we go:

The good

It gives voice to people who are not normally heard, and involves them to a higher degree. Some research methods amplify people’s lived experiences more effectively than others, and this amplification is essential for Oxfam’s mandate of listening to communities. Compared to purely qualitative tools (because of the quantity of narratives collected, which go beyond mere anecdotes), and even more so compared to quantitative ones, SenseMaker puts the voice of the respondent front and centre. And compared to large-n surveys especially, it is less extractive in its process, as it encourages people to share their experiences and give meaning to a topic of enquiry in a highly participative manner and on their own terms.

In what is a nightmare for many, SenseMaker data is highly ambiguous and requires a flexible approach to interpretation. How we understand personal narratives, especially when there are many hundreds or more, is subjective. And how data is dispersed for visual question types like triads doesn’t always follow any clear logic. There is no handbook for data analysis. As a researcher, this really takes me out of my comfort zone. But the flipside is that, under the right conditions, this ambiguity can support agility in thinking, because it entails exploring areas that weren’t previously even recognized as needing further research. SenseMaker was designed to go beyond the average or typical experience and to purposefully consider unusual trends: uncovering differing perspectives and views is encouraged, and the average matters less. This can be helpful under the right conditions, for example for planning adaptive M&E and management. Sometimes, this attribute has resulted in insights that I don’t think would have easily become visible through other methods.

… and the less good

Data interpretation of a SenseMaker project is ideally done in a collaborative way, involving different stakeholders to uncover what is relevant to them. Due to the limited data visualisation and analysis tools SenseMaker currently offers, replicability, a central tenet of scientific thinking, really goes out of the window. Data analysis isn’t typically reproducible, i.e. it does not allow another person working with the same material to arrive at the same conclusions by following, for example, a protocol, piece of code or other documented method. I haven’t managed to document my analysis processes to the point of allowing someone else to replicate them. Therefore, the results I’ve worked with have rarely felt incontestable; someone else may have ‘seen’ something else in the data. As a workaround, when a high degree of confidence is needed, we have analysed the data to test for significant differences through third-party software (for example in this Effectiveness Review on resilience, and this one on livelihoods).
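
For illustration, here is a minimal sketch of what such a scripted, re-runnable significance test could look like. The file name, column names and choice of test are my assumptions for the example, not the actual analysis behind the Effectiveness Reviews.

```python
# Hypothetical sketch of a scripted significance test, the kind of artefact
# that makes an analysis reproducible. File and column names are invented.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("triad_responses.csv")  # hypothetical data export
project = df.loc[df["group"] == "project", "weight_security"]
comparison = df.loc[df["group"] == "comparison", "weight_security"]

stat, p = mannwhitneyu(project, comparison, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```

Anyone re-running the same script on the same export would arrive at the same result, which is precisely what the purely visual, exploratory workflow lacks.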

Because of how a signification framework is constructed and how collaboratively it should be completed, SenseMaker prides itself on reducing social desirability bias, the bias that arises when respondents feel pressured to give the ‘right’ answers. But even highly trained SenseMaker enumerators can prompt this kind of bias in respondents. More importantly, SenseMaker encourages another type of bias. Because the data is presented as a visual digest of how responses are distributed across the typical triads, dyads and canvasses, those looking at the resulting data run an added risk of unwitting cognitive biases such as the clustering illusion: perceiving a pattern in what is actually random, because we unconsciously look for meaning.
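
As a rough illustration of how easily random dispersion reads as signal, here is a small simulation (my own, not part of SenseMaker) that scatters responses uniformly at random across a triad:

```python
# A minimal simulation of the clustering illusion: points placed uniformly at
# random inside a triad can still *look* clumped, especially in small samples.
import random

def random_triad_point() -> tuple[float, float, float]:
    """Uniform random barycentric coordinates (a, b, c) with a + b + c == 1."""
    u, v = sorted(random.random() for _ in range(2))
    return (u, v - u, 1 - v)

# Count how many of 30 random responses sit "near" each corner (weight > 0.5):
sample = [random_triad_point() for _ in range(30)]
print([sum(p[i] > 0.5 for p in sample) for i in range(3)])
# e.g. [9, 4, 7] -- uneven by pure chance, yet tempting to read as a pattern
```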

Last but not least, the technology behind SenseMaker doesn’t allow for the best user experience, especially on Android devices. This is a problem especially when working with poor internet connectivity and under time constraints, both typical of fieldwork. On the flipside, of all the methods I’ve worked with, looking at many personal stories and then exploring which indicators respondents choose to qualify their situation has probably been the most successful at making me question my own assumptions.

What is your experience with SenseMaker?
