Sunday, 30 August 2015

On rigour, numbers and discovery


Recently, I was asked the following:

In talking to my psychology seminar group about their qualitative lab, I ended up looking at Helene Joffe’s book chapter on thematic analysis. She suggests including diagrammatic representations of the themes, together with quantitative data about how many participants mentioned each theme and its subparts. This appealed to the psychology students because it gave them quantitative data and helped them see how prevalent a theme was within the sample.

And then today I saw another paper, “Supporting thinking on sample sizes for thematic analyses: a quantitative tool”. It argues that one should consider the power of the study when deciding on sample size – another concept I’d only seen in quantitative research.

Both of these sources seem to be conducting qualitative analysis with at least a nod towards some of the benefits of quantitative data, which appears to lend the qualitative analysis more rigour. Of course, simply adding numbers doesn’t necessarily make something more rigorous, but it does add more information to the results of an analysis, and this could influence the reader’s perception of the quality of the research. However, I don’t recall seeing this in any HCI papers. Why isn’t it used more often?

The answer (or at least, my answer) hinges on nuances of research tradition that are not often discussed explicitly, at least in HCI:

Joffe, Fugard and Potts are all thinking and working in a positivist tradition that assumes an independent reality ‘out there’ and that doesn’t take into account the role of the individual researcher in making sense of the data. Numbers are great when they are meaningful, but they can hide a lot of important complexity. For example, in our study of people’s experience of home haemodialysis, we could report how many of the participants had a carer and how many had a helper. That’s a couple of numbers. But the really interesting understanding comes in how those people (whether trained as a carer or just acting as a helper) work with the patient to manage home haemodialysis, and how that impacts on their sense of being in control, how they stay safe, their experience of being on dialysis, and the implications for the design of both the technology and the broader system of care. Similarly, we could report how many of the participants reported feeling scared in the first weeks of dialysis, but that wouldn’t get at why they felt scared or how they got through that stage. We could now run a different kind of study to tease out the factors that contribute to people being scared (having established the phenomenon) and put numbers on them, but recruiting the larger sample (60–80 participants) needed for this kind of analysis would involve scouring the entire country for willing HHD participants and getting permission to conduct the study from every NHS Trust separately; I’d say that’s a very high cost for a low return.

Numbers don’t give you explanatory power and they don’t give you insights into the design of future technology. You need an exploratory study to identify issues; then a quantitative analysis can give the scale of the problem, but it doesn’t give you insight into how to solve the problem. For HCI studies, most people are more interested in understanding the problem for design than in doing the basic science that’s closer to hypothesis testing. Neither is right or wrong, but they have different motivations and philosophical bases. And as Gray and Salzman argued many years ago, using numbers to compare features that are not strictly comparable – in their case, features of different usability methods when used in practice – is 'damaged' (and potentially damaging).


Wolcott (p.36) quotes a biologist, Paul Weiss, as claiming, “Nobody who followed the scientific method ever discovered anything interesting.” The quantitative approach to thematic analysis doesn’t allow me to answer many of the questions I find interesting, so I’m not going to shift in that direction just to do studies that others consider more rigorous. Understanding the prevalence of phenomena is important, but so is understanding the phenomena, and the techniques you need for understanding aren’t always compatible with those you need for measuring prevalence. Unfortunately!

Friday, 16 May 2014

Let's be pragmatic: one approach to qualitative data analysis

Today, Hanna, one of my MSc students, has been asking interesting questions about doing a qualitative data analysis. Not the theory (there's plenty about that), but the basic practicalities.

I often point people at the Braun & Clarke (2006) paper on thematic analysis: it’s certainly a very good place to start. The Charmaz book on Grounded Theory (GT) is also a great resource about coding and analysis, even if you’re not doing a full GT. And I've written about Semi-Structured Qualitative Studies. For smallish projects (e.g. up to 20 hours of transcripts), computer-based tools such as Atlas.ti, NVivo and Dedoose tend to force the analyst to focus on the tool and on details rather than on themes.

I personally like improvised tools such as coloured pens and lots of notebooks, and/or simple Word files where I can do a first pass of approximate coding (either using the annotation feature or simply in a multi-column table). At that stage, I don’t worry about consistency of codes: I’m just trying to see what’s in the data: what seem to be the common patterns and themes, what are the surprises that might be worth looking at in more detail.

I then do a second pass through all the data looking systematically for the themes that seem most interesting / promising for analysis. At this stage, I usually copy-and-paste relevant chunks of text into a separate document organised according to the themes, without worrying about connections between the themes (just annotating each chunk with which participant it came from so that I don’t completely lose the context for each quotation).
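
(If you'd rather script this filing step than copy-and-paste in Word, the same bookkeeping can be sketched in a few lines of Python. Everything below – the theme names, participant IDs and quotations – is purely illustrative, not data from any real study.)

from collections import defaultdict

# Purely hypothetical extracts: (participant, theme, quotation) triples
# produced by a first pass of approximate coding.
coded_extracts = [
    ("P01", "staying in control", "I decide when we start, not the machine."),
    ("P03", "staying in control", "We always run through the checklist together."),
    ("P01", "early anxiety", "The first few weeks were the hardest."),
]

# Group the extracts by theme, keeping the participant label attached to
# each quotation so its context isn't lost.
extracts_by_theme = defaultdict(list)
for participant, theme, quotation in coded_extracts:
    extracts_by_theme[theme].append((participant, quotation))

# Print one section per theme, ready to paste into a working document.
for theme, quotations in sorted(extracts_by_theme.items()):
    print(f"== {theme} ==")
    for participant, quotation in quotations:
        print(f"[{participant}] {quotation}")

The point isn't the script: it's that every extract stays tied to the participant it came from, however the filing is done.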

Step 3 is to build a narrative within each of the themes; at this point, I will often realise that there’s other data that also relates to the theme that I hadn’t noticed on the previous passes, so the themes and the narrative get adapted. This requires looking through the data repeatedly, to spot omissions. While doing this, it's really important to look for contradictory evidence, which is generally an indication that the story isn't right: that there are nuances that haven't been captured. Such contradictions force a review of the themes. They may also highlight a need to gather more data to resolve ambiguities.

The fourth step is to develop a meta-narrative that links the themes together into an overall story. At this point, some themes will get ditched; maybe I’ll realise that there’s another theme in the data that should be part of this bigger narrative, so I go back to stage 2, or even stage 1.  Repeat until done!

At some point, you relate the themes to the literature. In some cases, the literature review (or a theory) will have guided all the data gathering and analysis. In other cases, you get to stage 4, realise that someone has already written exactly that paper, utter a few expletives, and review what alternative narratives there might be in your data that are equally well founded but more novel. Usually, it’s somewhere between these extremes.

This sounds ad hoc, but done properly it’s both exploratory and systematic, and doesn’t have to be constrained by the features of a particular tool.

Sunday, 12 August 2012

The right tool for the job? Qualitative methods in HCI

It's sad to admit it, but my holiday reading has included Carla Willig's (2008) text on qualitative research methods in psychology and Jonathan Smith's (2007) edited collection on the same topic. I particularly enjoyed the chapters by Smith on Interpretative Phenomenological Analysis and by Kathy Charmaz on Grounded Theory in the edited collection. One striking feature of both books is that they have a narrative structure of "here's a method; here are its foundations; this is what it's good for; this is how to apply it". In other words, both seem to take the view that one becomes an expert in using a particular method, then builds a career by defining problems that are amenable to that method.

One of the features of Human–Computer Interaction (HCI) as a discipline is that it is not (with a few notable exceptions) fixated on what methods to apply. It is much more concerned with choosing the right tools for the job at hand, namely some aspect of the design or evaluation of interactive systems that enhance the user experience, productivity, safety or similar. So does it matter whether the method applied is "clean" Grounded Theory (in any of its variants) or "clean" IPA? I would argue not. The problem, though, is that we need better ways of planning qualitative studies in HCI, and then of describing how data was really gathered and what analysis was performed, so that we can better assess the quality, validity and scope of the reported findings.

There's a trade-off to be made between doing studies that can be done well because the method is clear and well-understood and doing studies that are important (e.g. making systems safer) but for which the method is unavoidably messy and improvisational. An important challenge for HCI (which has always adopted and adapted methods from other disciplines that have stronger methodological foundations) is to develop a better set of methods that address the important research challenges of interaction design. These aren't limited to qualitative research methods, but that is certainly one area where it's important to have a better repertoire of techniques that can be applied intelligently and accountably to address exciting problems.

Friday, 1 June 2012

When is a qualitative study a Grounded Theory study?

I recently came across Beki Grinter's blog posts on Grounded Theory. These make great reading.

The term has been used a lot in HCI as a "bumper sticker" for any and every qualitative analysis regardless of whether or not it follows any of the GT recipes closely, and whether or not it results in theory-building. I exaggerate slightly, but not much. As Beki says, GT is about developing theory, not just about doing a "bottom up" qualitative analysis, possibly without even having any particular questions or aims in mind.

Sometimes, the questions do change, as you discover that your initial questions or assumptions about what you might find are wrong. This has happened to us more than once. For example, we conducted a study of London Underground control rooms where the initial aim was to understand the commonalities and contrasts across different control rooms, and what effects these differences had on the work of controllers, and the ways they used the various artefacts in the environment. In practice, we found that the commonalities were much more interesting than the contrasts, and that there were several themes that emerged across all the contexts we studied. The most intriguing was discovering how much the controllers seemed to be playing with a big train set! This links in to the literature on "serious games", a literature that we hadn't even considered when we started the study (so we had to learn about it fast!).

In our experience, there's an interdependent cycle between qualitative data gathering and analysis and pre-existing theory. You start with questions, gather and analyse some data, realise your questions weren't quite right, so modify them (usually to be more interesting!), gather more data, analyse it much more deeply, realise that Theory X almost accounts for your data, see what insights relating your data to Theory X provides, gather yet more data, analyse it further... end up with either some radically new theory or a minor adaptation of Theory X. Or (as in our study of digital libraries deployment) end up using Theory X (in this case, Communities of Practice) to make sense of the situations you've studied.

Many would say that a "clean" GT doesn't draw explicitly on any existing theories, but builds theory from data. In practice, in my experience, you get a richer analysis if you do draw on other theory, but that's not an essential part of GT. The important thing is to be reflective and critical: to use theory to test and shine light on your data, but not to succumb to confirmation bias, where you only notice the data that fits the theory and ignore the rest. Theory is always there to be overturned!