Monday 31 August 2015

The Digital Doctor


I’ve just finished reading The Digital Doctor by Robert Wachter. It was published this year, and gives great insight into US developments in electronic health records, particularly over the past few years: Meaningful Use and the rise of EPIC. The book manages to steer a great course between being personal (about Wachter’s career and the experiences of people around him) and drawing out general themes, albeit from a US perspective. I’d love to see an equivalent book about the UK, but suspect there would be no-one qualified to write it.

The book is simultaneously fantastic and slightly frustrating. I’ll deal with the frustrating first: although Wachter claims that a lot of the book is about usability (and indeed there are engaging and powerful examples of poor usability that have resulted in untoward incidents), he seems unaware that there’s an entire discipline devoted to understanding human factors and usability, and that people with that expertise could contribute to the debate. My frustration is not with Wachter, but with the fact that human factors is apparently still so invisible, and that there still seems to be an assumption that the only qualification needed to be an expert in human factors is to be human.

The core example (the overdose of a teenage patient with 38.5 times the intended dose of a common antibiotic) is told compellingly from the perspectives of several of the protagonists:

    poor interface design leads to the doctor specifying the dose in mg, but the system defaulting to mg/kg and therefore multiplying the intended dose by the weight of the patient;

    the system issues so many indistinguishable alerts (most very minor) that the staff become habituated to cancelling them without much thought – and one of the reasons for so many alerts is the EHR supplier covering themselves against liability for error;

    the pharmacist who checked the order was overloaded and multitasking, using an overly complicated interface, and trusted the doctor;

    the robot that issued the medication had no ‘common sense’ and did not query the order;

    the nurse who administered the medication was new and didn’t have anyone more senior to quickly check the prescription with, and so assumed that the earlier checks would have caught any error and the order must be correct;

    the patient was expecting a lot of medication, so didn’t query how much “a lot” ought to be.

This is about design and culture. There is surprisingly little about safer design from the outset: it’s hardly as if “alert fatigue” is a new phenomenon, or as if poor interface design and the confusability of units are surprising or new. While those involved in deploying new technology in healthcare should be able to learn from their own mistakes, there’s surely also room for learning from the mistakes (and the expertise!) of others.
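
To make the units confusion concrete, here is a minimal sketch of how a mg/kg default silently turns a fixed dose into a weight-multiplied one. Everything in it is illustrative rather than taken from the book – the dose figure, weight and ceiling are my own assumptions – though a 38.5× overdose implies a patient weight of about 38.5 kg.

```python
# Illustrative sketch only: the dose, weight and ceiling are hypothetical,
# not details from the book. It shows how a unit default (mg/kg instead of
# mg) silently multiplies a fixed dose by the patient's weight.

WEIGHT_KG = 38.5        # hypothetical weight; a 38.5x overdose implies ~38.5 kg
INTENDED_DOSE_MG = 160  # what the doctor meant: a fixed dose in mg

def ordered_dose_mg(entered_value: float, unit: str, weight_kg: float) -> float:
    """The dose the system will actually order, given the selected unit."""
    if unit == "mg":
        return entered_value
    if unit == "mg/kg":
        return entered_value * weight_kg  # weight-based dosing
    raise ValueError(f"unknown unit: {unit}")

# The doctor types 160 intending mg, but the order screen defaults to mg/kg:
ordered = ordered_dose_mg(INTENDED_DOSE_MG, "mg/kg", WEIGHT_KG)
print(f"{ordered} mg ordered, {ordered / INTENDED_DOSE_MG:.1f}x the intended dose")
# -> 6160.0 mg ordered, 38.5x the intended dose

# One safeguard: a per-drug hard ceiling that blocks the order outright,
# rather than raising yet another dismissable alert.
MAX_SINGLE_DOSE_MG = 320  # hypothetical ceiling for this drug
if ordered > MAX_SINGLE_DOSE_MG:
    print("BLOCKED: dose exceeds the plausible maximum for this drug")
```

The design point is in the last few lines: a hard limit that blocks an implausible order outright is a different kind of defence from adding one more dismissable alert to the blizzard described above.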

The book covers a lot of other territory: from the potential for big data analytics to transform healthcare, to the changing role of the patient (and the evolving clinician–patient relationship), to the cultural context within which all the changes are taking place. I hope that Wachter’s concluding optimism is well founded: it’s going to be a long, hard road from here to there, requiring a significant cultural shift in healthcare and across society. The book really brought home to me some of the limitations of “user-centred design” in a world that is trying to achieve such transformational change in such a short period of time, with everyone having to just muddle through. It should be read by everyone involved in the procurement and deployment of new electronic health record systems, and by their patients too... and of course by healthcare policy makers: we can all learn from the successes and struggles of the US health system.

Sunday 30 August 2015

On rigour, numbers and discovery


Recently, I was asked the following:

In talking to my psychology seminar group about their qualitative lab, I ended up looking at Helene Joffe’s book chapter on thematic analysis. She suggests including diagrammatic representations of the themes, together with quantitative data about how many participants mentioned the theme and its subparts. This appealed to the psychology students because it gives them quantitative data and helped them see how prevalent that theme was within the sample.

And then today I saw another paper, “Supporting thinking on sample sizes for thematic analyses: a quantitative tool”. It argues that one should consider the power of the study when deciding on sample size – another concept I’d only seen in quantitative research.

Both of these sources seem to be conducting qualitative analysis with at least a nod towards some of the benefits of quantitative data, which appears to lend qualitative analysis more rigour. Of course, simply adding numbers doesn’t necessarily make something more rigorous, but it does add more information to the results of an analysis, and this could influence the reader’s perception of the quality of the research. However, I don’t recall seeing this in any HCI papers. Why isn’t it used more often?

The answer (or at least, my answer) hinges on nuances of research tradition that are not often discussed explicitly, at least in HCI:

Joffe, Fugard and Potts are all thinking and working in a positivist tradition that assumes an independent reality ‘out there’ – one that doesn’t take into account the role of the individual researcher in making sense of the data. Numbers are great when they are meaningful, but they can hide a lot of important complexity.

For example, in our study of people’s experience of home haemodialysis, we could report how many of the participants had a carer and how many had a helper. That’s a couple of numbers. But the really interesting understanding comes in how those people (whether trained as a carer or just acting as a helper) work with the patient to manage home haemodialysis, and how that impacts on their sense of being in control, how they stay safe, their experience of being on dialysis, and the implications for the design of both the technology and the broader system of care. Similarly, we could report how many of our participants reported feeling scared in the first weeks of dialysis, but that wouldn’t get at why they felt scared or how they got through that stage. We could now run a different kind of study to tease out the factors that contribute to people being scared (having established the phenomenon) and put numbers on them, but recruiting the larger sample (60–80 participants) needed for this kind of analysis would involve scouring the entire country for willing HHD participants and getting permission to conduct the study from every NHS Trust separately; I’d say that’s a very high cost for a low return.
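
As an aside on the ‘power’ idea raised in the question: for a flavour of what such a quantitative tool involves, here is a minimal sketch assuming a simple binomial model of theme prevalence (my reading of the general idea, not necessarily the exact method in the Fugard and Potts paper). If a theme occurs in a proportion theta of the population, you can ask how many participants you need for a given probability of observing it.

```python
# Sketch of a sample-size calculation for thematic analysis, assuming a
# simple binomial model (the actual Fugard and Potts tool may differ).
from math import comb

def prob_theme_observed(theta: float, n: int, at_least: int = 1) -> float:
    """Probability of seeing a theme in at least `at_least` of n participants,
    if the theme occurs in a proportion `theta` of the population."""
    # P(X >= k) = 1 - P(X <= k-1), where X ~ Binomial(n, theta)
    return 1 - sum(comb(n, i) * theta**i * (1 - theta)**(n - i)
                   for i in range(at_least))

def smallest_n(theta: float, power: float = 0.8, at_least: int = 1) -> int:
    """Smallest sample size giving at least `power` probability of
    observing the theme `at_least` times."""
    n = at_least
    while prob_theme_observed(theta, n, at_least) < power:
        n += 1
    return n

# A theme held by 10% of the population, to be seen at least twice
# with 80% probability:
print(smallest_n(0.10, power=0.8, at_least=2))  # -> 29 participants
```

Note what such a calculation does and doesn’t give you: a defensible sample size for detecting a theme of a given prevalence, but no help at all with understanding the theme once you have found it.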

Numbers don’t give you explanatory power, and they don’t give you insights into the design of future technology. You need an exploratory study to identify issues; a quantitative analysis can then give the scale of a problem, but it doesn’t give you insight into how to solve it. For HCI studies, most people are more interested in understanding the problem for design than in doing the basic science that’s closer to hypothesis testing. Neither is right or wrong, but they have different motivations and philosophical bases. And as Gray and Salzman argued many years ago, using numbers to compare features that are not strictly comparable – in their case, features of different usability methods as used in practice – is 'damaged' (and potentially damaging).


Wolcott (p.36) quotes a biologist, Paul Weiss, as claiming, “Nobody who followed the scientific method ever discovered anything interesting.” The quantitative approach to thematic analysis doesn’t allow me to answer many of the questions I find interesting, so I’m not going to shift in that direction just to do studies that others consider more rigorous. Understanding the prevalence of phenomena is important, but so is understanding the phenomena, and the techniques you need for understanding aren’t always compatible with those you need for measuring prevalence. Unfortunately!

Saturday 22 August 2015

Innovation for innovation's sake?

As Director of the UCL Institute of Digital Health, my job is to envision the future. The future is fuelled by innovation and vision. And there's plenty of that around. But the reality is much more challenging: as summarised in a recent blog post, most people aren't that interested in engaging with their health data (those most likely to be tracking their data are young, fit and wealthy), and most clinicians are struggling even to do their basic (reactive) jobs, without much chance to think about the preventative (proactive) steps they might take to help people manage their health.

Why might this be? Innovation is creative and fun. It's also essential (without it, we'd still be wallowing around in the primordial soup). But there's a tendency for innovation to assume a world that is simpler than the real world: people who are engaged and compliant and have time to take up the innovation. Innovation tends not to engage with the inconvenient truths of real life, or to tackle the difficult and complex challenges that get in the way of simple visions.

We need a new approach to innovation: one that takes the really difficult challenges seriously, that accepts that the rate of progress may be slow, and that recognises that it's much harder to change people and cultural practices than it is to change technology – but that all of these need to be aligned for innovation to really work.

We need innovation that works with and for people. And we need to recognise that an important part of innovation is dealing with the inconvenient and difficult problems that seem to beset healthcare delivery, in all its forms.