
Monday, 31 August 2015

The Digital Doctor


I’ve just finished reading The Digital Doctor by Robert Wachter. It was published this year, and gives great insight into US developments in electronic health records, particularly over the past few years: Meaningful Use and the rise of EPIC. The book manages to steer a great course between being personal (about Wachter’s career and the experiences of people around him) and drawing out general themes, albeit from a US perspective. I’d love to see an equivalent book about the UK, but suspect there would be no-one qualified to write it.

The book is simultaneously fantastic and slightly frustrating. I'll deal with the frustrating first: although Wachter claims that a lot of the book is about usability (and indeed there are engaging and powerful examples of poor usability that have resulted in untoward incidents), he seems unaware that there’s an entire discipline devoted to understanding human factors and usability, and that people with that expertise could contribute to the debate. My frustration is not with Wachter, but with the fact that human factors is apparently still so invisible, and that there still seems to be an assumption that the only qualification needed to be an expert in human factors is to be a human.

The core example (the overdose of a teenage patient with 38.5 times the intended dose of a common antibiotic) is told compellingly from the perspectives of several of the protagonists:

    poor interface design leads to the doctor specifying the dose in mg, but the system defaulting to mg/kg and therefore multiplying the intended dose by the weight of the patient (see the sketch below);

    the system issues so many indistinguishable alerts (most very minor) that the staff become habituated to cancelling them without much thought – and one of the reasons for so many alerts is the EHR supplier covering themselves against liability for error;

    the pharmacist who checked the order was overloaded and multitasking, using an overly complicated interface, and trusted the doctor;

    the robot that issued the medication had no ‘common sense’ and did not query the order;

    the nurse who administered the medication was new and didn’t have anyone more senior to quickly check the prescription with, so assumed that all the earlier checks would have caught any error, so the order must be correct;

    the patient was expecting a lot of medication, so didn’t query how much “a lot” ought to be.

This is about design and culture. There is surprisingly little about safer design from the outset (it’s hardly as if “alert fatigue” is a new phenomenon, or as if the user interface design and confusability of units is surprising or new): while those involved in deploying new technology in healthcare should be able to learn from their own mistakes, there’s surely also room for learning from the mistakes (and the expertise!) of others.
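To make the first failure in that list concrete, here is a minimal sketch (in Python) of how a unit default can silently multiply a dose. The function and the numbers are hypothetical illustrations, not the actual EHR logic; the only figure taken from the book is the 38.5x factor, which on the account above is simply the patient's weight in kilograms.

```python
# Minimal sketch of the mg vs mg/kg confusion described above.
# Hypothetical function and values for illustration -- not the actual EHR code.

def dispensed_dose_mg(entered_value: float, unit: str, weight_kg: float) -> float:
    """Dose (in mg) the system will actually order, given the unit in force."""
    if unit == "mg":
        return entered_value               # total dose, as the doctor intended
    if unit == "mg/kg":
        return entered_value * weight_kg   # weight-based dosing
    raise ValueError(f"unknown unit: {unit}")

intended_total_mg = 160.0   # illustrative intended total dose
weight_kg = 38.5            # weight consistent with the 38.5x overdose in the text

# The doctor types the total dose, but the order screen's unit defaults to mg/kg:
actual = dispensed_dose_mg(intended_total_mg, "mg/kg", weight_kg)
print(actual / intended_total_mg)   # 38.5 -- the overdose factor is just the patient's weight
```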

The book covers a lot of other territory: from the potential for big data analytics to transform healthcare to the changing role of the patient (and the evolving clinician–patient relationship) and the cultural context within which all the changes are taking place. I hope that Wachter’s concluding optimism is well founded. It’s going to be a long, hard road from here to there that will require a significant cultural shift in healthcare, and across society. This book really brought home to me some of the limitations of “user centred design” in a world that is trying to achieve such transformational change in such a short period of time, with everyone having to just muddle through. This book should be read by everyone involved in the procurement and deployment of new electronic health record systems, and by their patients too... and of course by healthcare policy makers: we can all learn from the successes and struggles of the US health system.

Thursday, 18 July 2013

When reasoning and action don't match: Intentionality and safety

My team have been discussing the nature of “resilient” behavior, the basic idea being that people develop strategies for anticipating and avoiding possible errors, and creating conditions that enable them to recover seamlessly from disturbances. One of the examples that is used repeatedly is leaving one’s umbrella by the door as a reminder to take it when going out in case of rain. Of course, getting wet doesn’t seriously compromise safety for most people, but let’s let that pass: it’s unpleasant. This presupposes that people are able to recognize vulnerabilities and identify appropriate strategies to address them. Two recent incidents have made me rethink some of these presuppositions.

On Tuesday, I met up with a friend. She had left her wallet at work. It had been such a hot day that she had taken it out of her back pocket and put it somewhere safe (which was, of course, well hidden). She recognized that she was likely to forget it, and thought of ways to remind herself: leaving a note with her car keys, for instance. But she didn’t act on this intention. So she had done the learning and reflection, but it still didn’t work for her because she didn’t follow through with action.

My partner occasionally forgets to lock the retractable roof on our car. I have never made this mistake, but wasn’t sure why until I compared his behavior with mine. It turns out he is more relaxed than I am, and waits while the roof closes before taking the next step, which is often to close the windows, take the keys out of the lock and get out of the car. I, in contrast, am impatient. I can’t wait for the roof to finish closing, so as it comes over, my arm is already going up, ready to lock it. So I never forget (famous last words!): the action is automatised. The important point in relation to resilience is that I didn’t develop this behavior in order to keep the car safe or secure: I developed it because I assumed that the roof needed to be secured and I wanted it to happen as quickly as possible. So it is not intentional, in terms of safety, and yet it has the effect of making the system safer.

So what keeps the system safe(r) is not necessarily what people learn or reflect on, but what they act on. This is, of course, only one aspect of the problem; when major disturbances happen, it’s almost certainly more important to consider people’s competencies and knowledge (and how they acquired them). To (approximately) quote a London Underground controller: “We’re paid for what we know, not what we do”. Ultimately, it's what people do that matters in terms of safety; sometimes that can be clearly traced to what they know and sometimes it can't.


Saturday, 18 May 2013

When is a medical error a crime?

I've recently had Collateral Damage recommended to me. I'm afraid I can't face reading it: just the summary is enough. Having visited Johns Hopkins, and in particular the Armstrong Institute for Patient Safety, a couple of months ago, I'm pretty confident that the terrible experience of the Walter family isn't universal, even within that one hospital, never mind nationally or internationally. And therein lies a big challenge: that there is such a wide spectrum of experiences and practices in healthcare that it's very difficult to generalise.

There are clearly challenges:
  • the demands of doing science and of providing the best quality patient care may pull in opposing directions: if we never try new things, relying on what is already known as best practice, we may not make discoveries that actually transform care.
  • if clinicians are not involved in the design of future medical technologies then how can those technologies be well designed to support clinical practice? But if clinicians are involved in their design, and have a stake in their commercial success, how can they remain objective in their assessments of clinical effectiveness?
There are no easy answers to such challenges, but clearly they are cultural and societal challenges as well as being challenges for the individual clinician. They are about what a society values and what behaviours are acceptable and/or rewarded, whether through professional recognition or financially.

I know that I have a tendency to view things positively, to argue for a learning culture rather than a blame culture. Accounts like "Collateral Damage" might force one to question that position as being naive in the extreme. For me, though, the question is: what can society and the medical establishment learn from such an account? That's not an easy question to answer. Progress in changing healthcare culture is almost imperceptibly slow: reports such as "To Err is Human" and "An Organisation with a Memory", both published over a decade ago (and the UK report now officially 'archived'), haven't had much perceptible effect. Consider, for example, the recent inquiry into failings in Mid Staffordshire.

Bob Wachter poses the question "when is a medical error a crime?". He focuses on the idea of a 'just culture': that there is a spectrum of behaviours, from the kinds of errors that anyone could make (and for which learning is a much more constructive response than blaming), through 'at risk' behaviours to 'reckless' behaviours where major risks are knowingly ignored.

The Just Culture Community notes that "an organisation's mission defines its reason for being". From a patient's perspective, a hospital's "reason for being" is to provide the best possible healthcare when needed. Problems arise when the hospital's mission is "to generate a profit", to "advance science", or any other mission that might be at odds with providing the best possible care in the short term. The same applies to individual clinicians and clinical teams within the hospital.

I find the idea of a "just culture" compelling. It is not a simple agenda, because it involves balancing learning with blame, giving a sophisticated notion of accountability. It clearly places the onus for ensuring safety at an organisational / cultural level, within which the individual works, interacts and is accountable. But it does presuppose that the different people or groups broadly agree on the mission or values of healthcare. 'Collateral Damage' forces one to question whether that assumption is correct. It is surely a call for reflection and learning: what should the mission of any healthcare provider be? How is that mission agreed on by both providers and consumers? How are values propagated across stakeholders? Etc. Assuming that patient safety is indeed valued, we all need to learn from cases such as this.

Wednesday, 12 September 2012

The Hillsborough report 23 years on

I'm listening right now to the news report on the review of the Hillsborough disaster from 23 years ago. I have heard terms including "betrayed", "dreadful mistakes were made", "lies" and "shift blame" (all BBC News at Ten). There is talk of "cover up", and of people not admitting to the mistakes that were made.

Families of the victims seem to be saying that they were never looking for compensation but that they wanted to be heard, and they want to know the truth. Being heard seems to be so important; if we do not hear then we do not learn; if we do not learn then we cannot change practices for the better. Maybe for some compensation is important, but for many others all that matters is that the tragedy should not have been in vain.

Earlier today, in a different context, a colleague was arguing that we need people to be "accountable" for their actions and decisions, that people need to be punished for mistakes. But we all make mistakes, repeatedly and often amusingly; for example, this evening, I phoned one daughter thinking I was phoning the other one, and because I was so sure I knew who I was talking to, and because we have a lot of "common ground", it took us both a while to realise my error. We could both laugh about it. Errordiary documents lots of equally amusing mistakes. But occasionally, mistakes have unfortunate consequences. Hillsborough is a stark reminder of this. Do unfortunate consequences automatically mean that the people who made mistakes should be punished for them? Surely covering up mistakes is even more serious than making errors in the first place. How much could we have learned (and how much easier would it have been for families to have recovered) if those responsible had not covered up and avoided being accountable? Here, I want to use the term "accountable" in a much more positive sense, meaning that they were able to account for the decisions that they made, based on the information and goals that they had at the time.

Being accountable currently seems to be about assigning blame; maybe this is sometimes appropriate – particularly if the individual or organisation in question has not learned from previous analogous incidents. But maybe sometimes learning from mistakes is of more long term value than punishing people for them. That implies a different understanding of "accountable". We need to find a better balance between blame and learning. Unless I am much mistaken.

Wednesday, 4 July 2012

An accident: lots of factors, no blame

At one level, this is a story that has been told many times already, and yet this particular rendering of it is haunting me. I don't know all the details (and never will), so parts of the following are speculation, but the story is my best understanding of what happened, and it highlights some of the challenges in trying to make sense of human error and system design.

The air ambulance made a tricky descent. Although the incident took place near a local hospital, the casualty was badly injured and needed specialist treatment, so was flown to a major trauma centre. Hopefully, he will live.

What happened? The man fell, probably about 10 metres, as he was being lowered from the top of a climbing wall. It seems that he had put his climbing harness on backwards and tied the rope on to a gear loop (which is not designed to hold much weight) rather than tying it in correctly (through the waist loop and leg loop, which were behind him). Apparently, as he let the rope take his weight to be lowered off from the climb, the gear loop gave way.

I can only guess that both the climber and his partner were new to climbing, since apparently neither of them knew how to put the harness on correctly, and also that there was no-one else on the wall at the time (since climbers generally look out for each other and point out unsafe practices). But so many things must have aligned for the accident to happen: both climbers must have signed a declaration that they were experienced and recognised the risks; the harness in question had a gear loop at the centre of the back that they could mistake for a rope attachment point... but that loop wasn't strong enough to take the climber's weight; someone had supplied that harness to the climber without either providing clear instructions on how to put it on or checking that he knew...

So many factors: the climber and his partner apparently believed they were more expert than they actually were; the harness supplier (whether that was a vendor or a friend) didn't check that the climber knew how to use the equipment; there weren't other more expert climbers around to notice the error; the design of the harness had a usability vulnerability (a central loop that actually wasn't rated for a high load and could be mistaken for a rope attachment point); the wall's policy allowed people to self-certify as experienced without checking. Was anyone to blame? Surely not: this wasn't "an accident waiting to happen". But the system clearly wasn't as resilient as it might have been because when all these factors lined up, a young man had to be airlifted to hospital. I wish him well, and hope he makes a full recovery.

The wall has learnt from the incident and changed its admissions policy; hopefully, there will be other learning from it too to further reduce the likelihood of any similar incident occurring in the future. Safety is improved through learning, not through blaming.

Tuesday, 1 May 2012

Seeing is believing?

In a recent interview, Mary Beard recounted a Roman joke: "A guy meets another in the street and says: 'I thought you were dead.' The bloke says: 'Can't you see I'm alive?' The first replies: 'But the person who told me you were dead is more reliable than you.'" She used the joke (apparently considered hilarious all those centuries ago) to illustrate a point about changing cultures and the nature of evidence. But the question of evidence is just as important in our work today. When are verbal reports a reliable form of evidence, and when do you need more direct forms of evidence? What can you learn from web analytics or the device log of an infusion pump? What does observing people tell you, as against interviewing them? Etc.

In general, device logs of any kind should tell you what happened, over a large number of instances, but they can't tell you anything much about the circumstances or the causes (what people thought they were doing, or what context they were in). So they give you an idea of where problems might lie, but not really what those problems are; they give quantity, but not necessarily quality.
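As a toy illustration of that asymmetry, here is a small Python sketch over an invented log format (the event names and structure are made up, not taken from any real infusion pump or analytics tool): it can count how often alerts were overridden, but nothing in the data says why.

```python
# Hypothetical sketch: what a device log supports (counting what happened)
# versus what it cannot support (explaining why). The log format is invented.
from collections import Counter

log = [
    {"pump": "A", "event": "dose_alert"},
    {"pump": "A", "event": "alert_overridden"},
    {"pump": "B", "event": "dose_alert"},
    {"pump": "B", "event": "alert_overridden"},
    {"pump": "B", "event": "infusion_started"},
]

counts = Counter(entry["event"] for entry in log)
print(counts["alert_overridden"])   # 2 -- quantity: overrides clearly happen
# But no field here records whether an override was a considered clinical
# judgement or a habituated click -- that calls for interviews or observation.
```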

Conversely, interviews and observations can potentially give quality, but not quantity. They have greater explanatory power; interviews are good for finding out people's perceptions (e.g. of why they behave in certain ways), and observations will give insights into the contexts within which people do things and the circumstances surrounding actions. Interviews may overlook details that people consider unremarkable, while observations may catch those details but not explain them. And of course the questions that are asked or the way an observational study is conducted will determine what data is gathered.

As I type this, most of it seems very self-evident, and yet people often seem to choose inappropriate data gathering methods that don't reliably answer the questions posed. I'll use an example from a researcher I have great respect for, and who is undeniably a leader in the field. Ever since I first read it, I have been perplexed by Jim Reason's analysis of photocopier errors – not because it is inconsistent with other studies, but because it is based entirely on retrospective self-reports. But our memories of past events are highly selective. I make errors every day, as we all do (see errordiary for both mundane and bizarre examples), but the ones I can recall later are the ones that were most embarrassing, most costly, most amusing or otherwise memorable. So what confidence can we have in retrospective reports as a way of measuring error? I don't know. And I don't think that's an admission of failure on my part; it's a recognition that retrospective self-report is an unreliable way of gathering data about human error. And that remains a challenge: to match research questions and data gathering and analysis methods appropriately.

Saturday, 24 March 2012

"Be prepared"

We're thinking a lot about resilience at the moment (what it is, what it is not, how it is useful for thinking about design and training). A couple of years ago, I went climbing on Lundy. Beautiful place, highly recommended, though prone to being wet. Lundy doesn't have a climbing equipment shop, so it's important that you have everything with you. And because most of the climbing is on sea cliffs, if you drop anything you're unlikely to be able to retrieve it. So take spares: that's recognising a generic vulnerability, and planning a generic solution. In particular, I had the foresight to take a spare belay plate (essential for keeping your partner safe while climbing). This is an anticipatory approach to resilience for the "known unknowns": first recognise a vulnerability, and then act to reduce the vulnerability.

It happened: when I was half way up the Devil's Slide, my partner pulled the rope hard just as I was removing it from the belay plate, and I lost my grip... and watched the belay plate bounce down the rock to a watery grave in the sea 30m below. That's OK: I had a spare. Except that I didn't: the spare was in my rucksack at the top of the cliff. Fortunately, though, I had knowledge: I knew how to belay using an Italian Hitch knot, so I could improvise with other equipment I was carrying and keep us safe for the rest of the climb. This is a different kind of resilience: having a repertoire of skills that can be brought to bear in unforeseen circumstances, and having generic tools (like bits of string, penknives, and the like) that can be appropriated to fit unexpected needs.

This is a "boy scout" approach to resilience: for the "unknown unknowns" that cannot be anticipated, it's a case of having skills that can be brought to bear to deal with the unforeseen situation, and tools that can be used in ways that they might not have been designed for.

Thursday, 15 March 2012

Undies in the safe

Some time ago, I went to a conference in Konstanz. I put a few items in the room safe (££, passport, etc.)... and forgot to remove them when I checked out. Oops! Rather inconvenient!

This week, I've been in Stuttgart. How to make use of the room safe while also being sure to remember those important items when I check out? Solution: put my clean underwear for the last day in the safe with the higher-value items. No room thief would be interested in the undies, but I'm not going to leave without them, am I? That worked! It's an example of what we're currently calling a "resilient strategy": we're not sure that that's the right term, so if you (the reader) have better ideas, do let me know. Whatever the word, the important idea is that I anticipated a vulnerability to forgetting (drawing on the analogy of a similar incident) and formulated a way of reducing the likelihood of forgetting, by co-locating the forgettable items with some unforgettable ones.

The strategy worked even better than expected, though, because I told some people about what I'd done (to illustrate a point about resilience) while at the conference. And on my last evening, I was in the lift with another attendee. His parting words were: "don't forget your knickers!" In other situations, that could have been embarrassing; in the context, it raised some smiles... and acted as a further external memory aid to ensure that I remembered not just my clothing, but also the passport and sterling cash that I'd been storing in the safe. Other people engaging with a problem can make the system so much more resilient too!

Saturday, 10 March 2012

Attitudes to error in healthcare: when will we learn?

In a recent pair of radio programmes, James Reason discusses the possibility of a change in attitude in the UK National Health Service regarding human error and patient safety. The first programme focuses on experiences in the US, where some hospitals have shifted their approach towards open disclosure, being very open about incidents with the affected patients and their families. It shouldn't really be a surprise that this has reduced litigation and the size of payouts, as families feel more listened to and recognise that their bad experience has at least had some good outcome in terms of learning, to reduce the likelihood of such an error happening again.

The second programme focuses more on the UK National Health Service, on the "duty of candour" and "mandatory disclosure", and the idea of an open relationship between healthcare professional and patients. It discusses the fact that the traditional secrecy and cover-ups lead to "secondary trauma", in which patients' families suffer from the silence and the frustration of not being able to get to the truth. There is of course also a negative effect on doctors and nurses who suffer the guilt of harming someone who had put their trust in them. It wasn't mentioned in the programme, but the suicide of Kim Hiatt is a case in point.

A shift in attitude requires a huge cultural shift. There is local learning (e.g. by an individual clinician or a clinical team) that probably takes effect even without disclosure, provided that there is a chance to reflect on the incident. But to have a broader impact, the learning needs to be disseminated more widely. This should lead to changes in practice, and also to changes in the design of technology and protocols for delivering clinical care. This requires incident reporting mechanisms that are open, thorough and clear. Rather than focusing on who is "responsible" (with a subtext that that individual is to blame), or on how to "manage" an incident (e.g. in terms of how it gets reported by the media), we will only make real progress on patient safety by emphasising learning. Reports of incidents that lay blame (e.g. the report on an unfortunate incident in which a baby received an overdose) will hardly encourage greater disclosure: if you fear blame then the natural reaction is to clam up. Conversely, though, if you clam up then that tends to encourage others to blame: it becomes a vicious cycle.

As I've argued in a recent CS4FN article, we need a changed attitude to reporting incidents that recognises the value of reporting for learning. We also need incident reporting mechanisms that are open and effective: that contain enough detail to facilitate learning (without compromising patient or clinician confidentiality), and that are available to view and to search, so that others can learn from every unfortunate error. It's not true that every cloud has a silver lining, but if learning is effective then it can be the silver lining in the cloud of each unfortunate incident.