
Sunday, 30 August 2015

On rigour, numbers and discovery


Recently, I was asked the following:

In talking to my psychology seminar group about their qualitative lab, I ended up looking at Helene Joffe’s book chapter on thematic analysis. She suggests including diagrammatic representations of the themes, together with quantitative data about how many participants mentioned each theme and its subparts. This appealed to the psychology students because it gives them quantitative data and helps them see how prevalent a theme was within the sample.

And then today I saw another paper, “Supporting thinking on sample sizes for thematic analyses: a quantitative tool”. It argues that one should consider the power of the study when deciding on sample size – another concept I’d only seen in quantitative research.

Both of these sources seem to be conducting qualitative analysis with at least a nod towards some of the benefits of quantitative data, which appears to make qualitative analysis more rigorous. Of course, simply adding numbers doesn’t necessarily make something more rigorous, but it does add more information to the results of an analysis, and this could influence the reader’s perception of the quality of the research. However, I don’t recall seeing this in any HCI papers. Why isn’t it used more often?

The answer (or at least, my answer) hinges on nuances of research tradition that are not often discussed explicitly, at least in HCI:

Joffe, Fugard and Potts are all thinking and working in a positivist tradition that assumes an independent reality ‘out there’, and that doesn’t take into account the role of the individual researcher in making sense of the data. Numbers are great when they are meaningful, but they can hide a lot of important complexity.

For example, in our study of people’s experience of home haemodialysis (HHD), we could report how many of the participants had a carer and how many had a helper. That’s a couple of numbers. But the really interesting understanding comes in how those people (whether trained as a carer or just acting as a helper) work with the patient to manage home haemodialysis, and how that impacts on their sense of being in control, how they stay safe, their experience of being on dialysis, and the implications for the design of both the technology and the broader system of care. Similarly, we could report how many of our participants reported feeling scared in the first weeks of dialysis, but that wouldn’t get at why they felt scared or how they got through that stage. We could now run a different kind of study to tease out the factors that contribute to people being scared (having established the phenomenon) and put numbers on them, but to recruit the larger sample (60–80 participants) needed for that kind of analysis would involve scouring the entire country for willing HHD participants and getting permission to conduct the study from every NHS Trust separately; I’d say that’s a very high cost for a low return.
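(As an aside: the ‘power’ reasoning behind sample-size tools like the one mentioned in the question boils down to fairly simple probability. The sketch below assumes a basic binomial model – a theme held by some proportion of the population, with participants sampled independently – which captures the spirit of such tools rather than the exact model in that paper; the function name is just mine.)

```python
from math import ceil, log

def min_sample_size(prevalence: float, confidence: float = 0.95) -> int:
    """Smallest n such that a theme held by the given proportion of the
    population shows up at least once among n participants with the
    requested probability, assuming participants are sampled independently
    (a simple binomial model)."""
    # We want 1 - (1 - prevalence)**n >= confidence, i.e.
    # n >= log(1 - confidence) / log(1 - prevalence).
    return ceil(log(1 - confidence) / log(1 - prevalence))

# A theme held by 10% of the population needs ~29 participants for a 95%
# chance of appearing at least once; a rarer theme (2%) needs ~149.
print(min_sample_size(0.10))  # 29
print(min_sample_size(0.02))  # 149
```

A calculation like that tells you how many people you might need before a theme is likely to show up at all; it says nothing about understanding the theme once it has shown up, which is the nub of what follows.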

Numbers don’t give you explanatory power and they don’t give you insights into the design of future technology. You need an exploratory study to identify issues; then a quantitative analysis can give the scale of the problem, but it doesn’t give you insight into how to solve the problem. For HCI studies, most people are more interested in understanding the problem for design than in doing the basic science that’s closer to hypothesis testing. Neither is right or wrong, but they have different motivations and philosophical bases. And as Gray and Salzman argued, many years ago, using numbers to compare features that are not strictly comparable – in their case, features of different usability methods when used in practice – is 'damaged' (and potentially damaging).


Wolcott (p.36) quotes a biologist, Paul Weiss, as claiming, “Nobody who followed the scientific method ever discovered anything interesting.” The quantitative approach to thematic analysis doesn’t allow me to answer many of the questions I find interesting, so I’m not going to shift in that direction just to do studies that others consider more rigorous. Understanding the prevalence of phenomena is important, but so is understanding the phenomena, and the techniques you need for understanding aren’t always compatible with those you need for measuring prevalence. Unfortunately!

Friday, 16 May 2014

Let's be pragmatic: one approach to qualitative data analysis

Today, Hanna, one of my MSc students, has been asking interesting questions about doing a qualitative data analysis. Not the theory (there's plenty about that), but the basic practicalities.

I often point people at the Braun & Clarke (2006) paper on thematic analysis: it’s certainly a very good place to start. The Charmaz book on Grounded Theory (GT) is also a great resource about coding and analysis, even if you’re not doing a full GT. And I've written about Semi-Structured Qualitative Studies. For smallish projects (e.g. up to 20 hours of transcripts), computer-based tools such as Atlas.ti, NVivo and Dedoose tend to force the analyst to focus on the tool and on details rather than on themes.

I personally like improvised tools such as coloured pens and lots of notebooks, and/or simple Word files where I can do a first pass of approximate coding (either using the annotation feature or simply in a multi-column table). At that stage, I don’t worry about consistency of codes: I’m just trying to see what’s in the data – what seem to be the common patterns and themes, and what are the surprises that might be worth looking at in more detail.

I then do a second pass through all the data looking systematically for the themes that seem most interesting / promising for analysis. At this stage, I usually copy-and-paste relevant chunks of text into a separate document organised according to the themes, without worrying about connections between the themes (just annotating each chunk with which participant it came from so that I don’t completely lose the context for each quotation).
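(If you prefer something a bit more structured than copy-and-paste, the same bookkeeping can be done with a few lines of script. The sketch below is purely illustrative – the participant IDs, excerpts and theme labels are made up – and it only automates the filing, not the judgement involved in coding.)

```python
from collections import defaultdict

# Illustrative only: each entry stands for a chunk copied out of a transcript
# during the second pass: (participant ID, excerpt, candidate theme).
coded_chunks = [
    ("P01", "…quote about feeling unsure in the first week…", "initial uncertainty"),
    ("P02", "…quote about a partner setting up the machine…", "role of the helper"),
    ("P01", "…quote about alarms and feeling in control…", "sense of control"),
]

# Group the excerpts by theme, keeping the participant ID with each chunk
# so the context of each quotation isn't completely lost.
by_theme = defaultdict(list)
for participant, excerpt, theme in coded_chunks:
    by_theme[theme].append((participant, excerpt))

for theme, excerpts in by_theme.items():
    print(f"\n== {theme} ==")
    for participant, excerpt in excerpts:
        print(f"  [{participant}] {excerpt}")
```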

Step 3 is to build a narrative within each of the themes. At this point, I will often realise that there’s other data relating to a theme that I hadn’t noticed on the previous passes, so the themes and the narrative get adapted; this requires looking through the data repeatedly to spot omissions. While doing this, it's really important to look for contradictory evidence, which is generally an indication that the story isn't right – that there are nuances that haven't been captured. Such contradictions force a review of the themes, and may also highlight a need to gather more data to resolve ambiguities.

The fourth step is to develop a meta-narrative that links the themes together into an overall story. At this point, some themes will get ditched; or maybe I’ll realise that there’s another theme in the data that should be part of the bigger narrative, so I go back to step 2, or even step 1. Repeat until done!

At some point, you relate the themes to the literature. In some cases, the literature review (or a theory) will have guided all the data gathering and analysis. In other cases, you get to step 4, realise that someone has already written exactly that paper, utter a few expletives, and review what alternative narratives there might be in your data that are equally well founded but more novel. Usually, it’s somewhere between these extremes.

This sounds ad-hoc, but done properly it’s both exploratory and systematic, and doesn’t have to be constrained by the features of a particular tool.

Tuesday, 20 August 2013

Hidden in full view: the daft things you overlook when designing and conducting studies

Several years ago, when Anne Adams and I were studying how people engaged with health information, we came up with the notion of an "information journey", with three main stages: recognising an information need, gathering information, and interpreting that information. The important point (to us) in that work was highlighting the importance of interpretation: the dominant view of information seeking at that time was that if people could find information then the job was done. But we found that an important role for clinicians is in helping lay people to interpret clinical information in terms of what it means for that individual – hence our focus on interpretation.

In later studies of lawyers' information work, Simon Attfield and I realised that there were two important elements missing from the information journey as we'd formulated it: information validation and information use. When we looked back at the health data, we didn't see a lot of evidence of validation (it might have been there, but it was largely implicit, and rolled up with interpretation) but – now sensitised to it – we found lots of evidence of information use. Doh! Of course people use the information – e.g. in subsequent health management – but we simply hadn't noticed it because people didn't talk explicitly about it as "using" the information. Extend the model.

Wind forwards to today, and I'm writing a chapter for InteractionDesign.org on semi-structured qualitative studies. Don't hold your breath on this appearing: it's taking longer than I'd expected.

I've (partly) structured it according to the PRETAR framework for planning and conducting studies:
  • what's the Purpose of the study?
  • what Resources are available?
  • what Ethical considerations need to be taken into account?
  • what Techniques to use for data gathering?
  • how to Analyse data?
  • how to Report results?
...and, having been working with that framework for several years now, I have just realised that there's an important element missing, somewhere between resources and techniques for data gathering. What's missing is the step of taking the resources (which define what is possible) and using them to shape the detailed design of the study – e.g., in terms of interventions.

I've tended to lump the details of participant recruitment in with Resources (even though it's really part of the detailed study design), and of informed consent in with Ethics. But what about interventions such as giving people specific tasks to do for a think-aloud study? Or giving people a new device to use? Or planning the details of a semi-structured interview script? Just because a resource is available, that doesn't mean it's automatically going to be used in the study, and all those decisions – which of course get made in designing a study – precede data gathering. I don't think this means a total re-write of the chapter, but a certain amount of cutting and pasting is about to happen ...
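(To make the gap concrete, here is the framework written out as a simple ordered checklist, with a placeholder ‘Design’ step slotted in. The label, the example questions and the exact position relative to Ethics are shorthand for the missing element described above, not a settled revision of PRETAR.)

```python
# PRETAR written out as an ordered checklist, with a placeholder "Design" step
# slotted in between Resources and Techniques to mark the missing element
# discussed above. The label and its exact position (before or after Ethics)
# are placeholders, not a settled revision of the framework.
PRETAR_PLUS_DESIGN = [
    ("Purpose",    "What is the purpose of the study?"),
    ("Resources",  "What resources (people, time, access, equipment) are available?"),
    ("Design",     "How do the available resources shape the detailed study design "
                   "(tasks, devices, interview scripts, other interventions)?"),
    ("Ethics",     "What ethical considerations need to be taken into account?"),
    ("Techniques", "What techniques will be used for data gathering?"),
    ("Analysis",   "How will the data be analysed?"),
    ("Reporting",  "How will the results be reported?"),
]

for step, question in PRETAR_PLUS_DESIGN:
    print(f"{step}: {question}")
```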