Recently, I was asked the following:
In talking to my psychology seminar group about their
qualitative lab I ended up looking at Helene Joffe’s book chapter on thematic
analysis. She suggests including diagrammatic representations of the
themes, together with quantitative data about how many participants mentioned
the theme and its subparts. This appealed to the psychology students
because it gives them quantitative data and helps them see how prevalent a
theme is within the sample.
And then today I saw another paper, “Supporting thinking on sample sizes for thematic analyses: a quantitative tool”.
It argues that one should consider the power of the study when deciding on
sample size – another concept I’d only seen in quantitative research.
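As I understand it, that sample-size tool rests on a simple binomial model: if a theme is held by some fraction of the population, how many participants do you need before at least k of them voice it with a given probability (the “power”)? A minimal sketch of that calculation (the function name and default values are mine, for illustration, not taken from the paper):

```python
from math import comb

def min_sample_size(prevalence, k, power=0.8, n_max=1000):
    """Smallest n such that a theme held by a fraction `prevalence`
    of the population is voiced by at least k of n participants
    with probability >= power, under a binomial model."""
    for n in range(k, n_max + 1):
        # P(X >= k) = 1 - P(X <= k - 1) for X ~ Binomial(n, prevalence)
        p_at_least_k = 1 - sum(
            comb(n, i) * prevalence**i * (1 - prevalence)**(n - i)
            for i in range(k)
        )
        if p_at_least_k >= power:
            return n
    return None  # not reachable within n_max

# To hear a theme held by 10% of the population at least twice,
# with 80% power:
print(min_sample_size(0.10, k=2, power=0.8))  # 29
```

The point of the model is exactly the kind of trade-off discussed below: rarer themes, or a demand to hear them more often, push the required sample size up quickly.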
Both of these sources seem to be conducting qualitative
analysis with at least a nod towards some of the benefits of quantitative data,
which appears to lend qualitative analysis more rigor. Of course,
simply adding numbers doesn’t necessarily make something more rigorous, but it
does add information to the results of an analysis, and this could influence
the reader’s perception of the quality of the research. However, I don’t
recall seeing this in any HCI papers. Why isn’t it used more often?
The answer (or at least, my answer) hinges
on nuances of research tradition that are not often discussed explicitly, at
least in HCI:
Joffe, Fugard and Potts are all thinking
and working in a positivist tradition that assumes an independent reality ‘out
there’ and doesn’t take into account the role of the individual researcher in
making sense of the data. Numbers are great when they are meaningful, but they can
hide a lot of important complexity. For example, in our study of people’s experience of home haemodialysis, we could report how many of the participants
had a carer and how many had a helper. That’s a couple of numbers. But the
really interesting understanding comes in how those people (whether trained as
a carer or just acting as a helper) work with the patient to manage home
haemodialysis, and how that impacts on their sense of being in control, how
they stay safe, their experience of being on dialysis, and the implications for
the design of both the technology and the broader system of care. Similarly, we
could report how many of our participants said they felt scared in the
first weeks of dialysis, but that didn’t get at why they felt scared or how
they got through that stage. We could now run a different kind of study to
tease out the factors that contribute to people being scared (having established
the phenomenon) and put numbers on them, but getting the larger sample (60–80
participants) needed for this kind of analysis would involve scouring the entire
country for willing HHD participants and getting permission to conduct the
study from every NHS Trust separately; I’d say that’s a very high cost for a
low return.
Numbers don’t give you explanatory power
and they don’t give you insights into the design of future technology. You need
an exploratory study to identify issues; then a quantitative analysis can give
the scale of the problem, but it doesn’t give you insight into how to solve the
problem. For HCI studies, most people are more interested in understanding the
problem for design than in doing the basic science that’s closer to hypothesis
testing. Neither is right or wrong, but they have different motivations and
philosophical bases. And as Gray and Salzman argued many years ago, using numbers to compare features that are not strictly comparable – in their case, features of different usability methods as used in practice – is ‘damaged’ (and potentially damaging).
Wolcott (p.36) quotes a biologist, Paul Weiss, as
claiming, “Nobody who followed the scientific method ever discovered
anything interesting.” The quantitative approach to thematic
analysis doesn’t allow me to answer many of the questions I find interesting,
so I’m not going to shift in that direction just to do studies that others
consider more rigorous. Understanding the prevalence of phenomena is important,
but so is understanding the phenomena, and the techniques you need for
understanding aren’t always compatible with those you need for measuring
prevalence. Unfortunately!
Thanks for your interesting evaluation of our work. I'd be happy to give a seminar talk to explain how the approach actually works. It is not "positivist". We spend some time discussing assumptions, e.g.,
“A quantitative model for a qualitative approach strikes some as inherently misguided. However, tensions between quantitative and qualitative methods can reflect more on academic politics than on epistemology. Qualitative approaches are generally associated with an interpretivist position, and quantitative approaches with a positivist one, but the methods are not uniquely tied to the epistemologies. An interpretivist need not eschew all numbers, and positivists can and do carry out qualitative studies (Lin, 1998). ‘Quantitative’ need not mean ‘objective’. Subjective approaches to statistics, for instance Bayesian approaches, assume that probabilities are mental constructions and do not exist independently of minds (De Finetti, 1989). Statistical models are seen as inhabiting a theoretical world which is separate to the ‘real’ world though related to it in some way (Kass, 2011). Physics, often seen as the shining beacon of quantitative science, has important examples of qualitative demonstrations in its history that were crucial to the development of theory (Kuhn, 1961).”