Saturday, 18 May 2013

When is a medical error a crime?

I've recently had Collateral Damage recommended to me. I'm afraid I can't face reading it: just the summary is enough. Having visited Johns Hopkins, and in particular the Armstrong Institute for Patient Safety, a couple of months ago, I'm pretty confident that the terrible experience of the Walter family isn't universal, even within that one hospital, never mind nationally or internationally. And therein lies a big challenge: that there is such a wide spectrum of experiences and practices in healthcare that it's very difficult to generalise.

There are clearly challenges:
  • the demands of doing science and of providing the best quality patient care may pull in opposing directions: if we never try new things, relying on what is already known as best practice, we may not make discoveries that actually transform care.
  • if clinicians are not involved in the design of future medical technologies then how can those technologies be well designed to support clinical practice? But if clinicians are involved in their design, and have a stake in their commercial success, how can they remain objective in their assessments of clinical effectiveness?
There are no easy answers to such challenges, but clearly they are cultural and societal challenges as well as being challenges for the individual clinician. They are about what a society values and what behaviours are acceptable and/or rewarded, whether through professional recognition or financially.

I know that I have a tendency to view things positively, to argue for a learning culture rather than a blame culture. Accounts like "Collateral Damage" might force one to question that position as naive in the extreme. For me, though, the question is: what can society and the medical establishment learn from such an account? That's not an easy question to answer. Progress in changing healthcare culture is almost imperceptibly slow: reports such as "To Err is Human" and "An Organisation with a Memory", both published over a decade ago (and the UK report now officially 'archived'), haven't had much perceptible effect. Consider, for example, the recent inquiry into failings in Mid Staffordshire.

Bob Wachter poses the question "when is a medical error a crime?". He focuses on the idea of a 'just culture': that there is a spectrum of behaviours, from the kinds of errors that anyone could make (and for which learning is a much more constructive response than blaming), through 'at risk' behaviours to 'reckless' behaviours where major risks are knowingly ignored.

The Just Culture Community notes that "an organisation's mission defines its reason for being". From a patient's perspective, a hospital's "reason for being" is to provide the best possible healthcare when needed. Problems arise when the hospital's mission is "to generate a profit", to "advance science", or any other mission that might be at odds with providing the best possible care in the short term. The same applies to individual clinicians and clinical teams within the hospital.

I find the idea of a "just culture" compelling. It is not a simple agenda, because it involves balancing learning with blame, giving a sophisticated notion of accountability. It clearly places the onus for ensuring safety at an organisational / cultural level, within which the individual works, interacts and is accountable. But it does presuppose that the different people or groups broadly agree on the mission or values of healthcare. 'Collateral Damage' forces one to question whether that assumption is correct. It is surely a call for reflection and learning: what should the mission of any healthcare provider be? How is that mission agreed on by both providers and consumers? How are values propagated across stakeholders? Etc. Assuming that patient safety is indeed valued, we all need to learn from cases such as this.

Coping with complexity in home hemodialysis

We've just had a paper published on how people who need to do hemodialysis at home manage the activity. Well done to Atish, the lead author.

People doing home hemodialysis are a small proportion of the people who need hemodialysis overall: the majority have to travel to a specialist unit for their care. Those doing home care have to take responsibility for a complex care regime. In this paper, we focus on how people use time as a resource to help with managing care. Strategies include planning to perform actions at particular times (so that time acts as a cue to perform an action); allowing extra time to deal with any problems that might arise; building time for reflection into the plan (to minimise the risk of forgetting steps); and organising tasks to minimise the number of things that need to be thought about or done at any one time (minimising peak complexity). There is a tendency to think about complex activities in terms of task sequences, and to ignore the time frame in which people carry out tasks, and how time (and our experience of time) can be used as a resource as well as, conversely, placing demands on us (e.g. through deadlines).

This study focused on a particular (complex and safety-critical) activity that has to be performed repeatedly (every day or two) by people who may not be clinicians but who become experts in the task. We all do frequent tasks, whether that's preparing a meal or getting ready to go to work, that involve time management. There's great value in regarding time as a resource, to be used effectively, as well as something that places demands on us (not enough time...).
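
To make one of these strategies, minimising peak complexity, a little more concrete, here is a toy sketch. It is not from the paper: the task names, durations and demand weights are all invented for illustration. The point is simply that bunching tasks together raises the peak demand on attention, while spreading the same tasks over the available time lowers it.

```python
# Toy illustration only (not from the paper): "peak complexity" read as the
# maximum total demand from overlapping tasks at any moment. Task names,
# times and demand weights below are invented.

def peak_demand(tasks):
    """Return the maximum total demand across overlapping tasks.

    tasks: list of (name, start_minute, duration_minutes, demand) tuples,
    where 'demand' is a rough weight for how much attention a task needs.
    """
    events = []
    for _, start, duration, demand in tasks:
        events.append((start, demand))               # task begins: demand rises
        events.append((start + duration, -demand))   # task ends: demand falls
    peak = current = 0
    for _, change in sorted(events):                 # sweep through time
        current += change
        peak = max(peak, current)
    return peak

# Doing everything at once piles the demands on top of each other...
bunched = [
    ("prime lines", 0, 10, 2),
    ("check fluids", 0, 5, 1),
    ("set machine parameters", 0, 10, 3),
]

# ...whereas staggering the same tasks over the available time lowers the
# peak, at the cost of the preparation taking longer overall.
staggered = [
    ("prime lines", 0, 10, 2),
    ("check fluids", 12, 5, 1),
    ("set machine parameters", 20, 10, 3),
]

print(peak_demand(bunched))    # 6
print(peak_demand(staggered))  # 3
```

The trade-off in the sketch (a lower peak demand in exchange for a longer overall session) is roughly the kind of use of time as a resource that the paper describes qualitatively.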

Sunday, 12 May 2013

Engineering for HCI: Upfront effort, downstream pay-back

The end of Engineering Practice 1 (c.1980).
Once upon a time, I was a graduate trainee at an engineering company. The training was organised as three-month blocks in different areas of the company. My first three months were on the (work)shop floor. Spending hours working milling machines and lathes was a bit of a shock after studying mathematics at Cambridge. You mean it is possible to use your body as well as your mind to solve problems?!?
I learned that engineering was about the art of the possible (e.g. at that time you couldn't drill holes that went around corners, though 3D printing has now changed our view of what is possible). And also about managing precision: manufacturing parts that were precise enough for purpose. Engineering was inherently physical: about solving problems by designing and delivering physical artefacts that were robust and reliable and fit for purpose. The antithesis of the "trust me, I'm an engineer" view (however much that makes me smile).

Enter "software engineering": arguably, this term was coined to give legitimacy to a certain kind of computer programming. Programming was (and often still is) something of a cottage industry: people building one-off systems that seem to work, but no-one is quite sure of how, or when they might break down. Engineering is intended to reduce the variability and improve the reliability of software systems. And deliver systems that are fit for purpose.

So what does it mean to "engineer" an interactive computer system? At the most recent IFIP Working Group 2.7/13.4 meeting, we developed a video: 'Engineering for HCI: Upfront effort, downstream pay-back'. And it was accepted for inclusion in the CHI2013 Video Showcase. Success! Preparing this short video turned out to be even more difficult than I had anticipated. There really didn't seem to be much consensus on what it means to "engineer" an interactive computer system. There is general agreement that it involves some rigour and systematicity, some use of theory and science to deliver reproducible results, but does the resulting system have to be usable, to be fit for purpose? And how would one measure that? Not really clear.

I started by saying that I once worked for an engineering company. That term is probably fairly unambiguous. But I've never heard of an "interactive systems engineering company" or an "HCI engineering company". I wonder what one of those would look like or deliver.

Saturday, 27 April 2013

When I get older: the uncountable positives


Last week, I was at a presentation by John Clarkson. It was a great talk: interesting, informative, thought provoking… Part-way through it, to make a point about the need for accessible technology, he presented a set of graphs showing how human capabilities decline with age. Basically, vision, hearing, strength, dexterity, etc. peak, on average, in the 20s, and it’s downhill all the way from there. It is possible that only two measurable values increase with age: age itself and grumpiness!

So this raises the obvious question: if we peak on every important variable when we’re in our 20s, why on earth aren’t most senior roles (Chief Executive, President, etc.) held by people in their 20s? Is this because grumpiness is in fact the most important quality, or is it because older people have other qualities that make them better suited to these roles? Most people would agree that it’s the latter.

The requisite qualities are often lumped under the term “wisdom”. I’m not an expert on wisdom, but I imagine there’s a literature defining and decomposing this concept to better understand it. One thing’s for sure though: it can’t be quantified in the way that visual or auditory acuity, strength, etc. can. The things that matter most for senior roles are not easily quantified.

We run a risk, in all walks of life, of thinking that if it can’t be measured then it has no value. In research we see it repeatedly in the view that the “gold standard” for research is controlled (quantifiable) experiments, and that qualitative research is “just stories”. In healthcare, this thinking manifests itself in many ways: in measures of clinical effectiveness and other outcome measures. In HCI, it manifests itself in the weight put on efficiency: of course, efficiency has its place (and we probably all have many examples of inefficient, frustrating interfaces), but there are many cases where the less easily measured outcomes (the quality of a search, the engagement of a game) are much more important.

As vision, hearing, memory, etc. decline, I'm celebrating wisdom and valuing the unmeasurable. Even if it can sound like "just stories".

Friday, 26 April 2013

Who's the boss? Time for a software update...

Last summer, I gave a couple of friends a lift to a place I was unfamiliar with, so I used a SatNav to help with the navigation. It was, of course, completely socially unaware. It interrupted our conversation repeatedly, without any consideration for when it is and is not appropriate to interrupt. No waiting for pauses in the conversation. No sensitivity to the importance of the message it was imparting. No apology. Standard SatNav behaviour. And indeed it's not obvious how one would design it any other way. After a while, we turned off the sound and relied solely on the visual guidance.
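
To make the design problem a little more concrete, here is a purely hypothetical sketch of what a slightly more socially aware prompt policy might look like. None of this is from a real SatNav; the function names, callbacks and thresholds are all invented. Even this naive version immediately runs into the hard questions: how do you detect a conversational pause reliably, and who decides what counts as urgent?

```python
import time

# Hypothetical sketch: defer low-priority guidance until a pause in the
# conversation, but never hold an instruction back beyond a deadline.
# All names and thresholds are invented for illustration.

PAUSE_SECONDS = 2.0       # how long the cabin must be quiet before speaking
MAX_DEFER_SECONDS = 30.0  # never defer a prompt longer than this

def deliver_prompt(message, is_urgent, conversation_active, speak):
    """Wait for a conversational pause before speaking, unless urgent.

    conversation_active: callable returning True while people are talking.
    speak: callable that actually voices the message.
    """
    if is_urgent:
        speak(message)    # an imminent manoeuvre can't wait for politeness
        return

    deadline = time.monotonic() + MAX_DEFER_SECONDS
    quiet_since = None
    while time.monotonic() < deadline:
        if conversation_active():
            quiet_since = None                     # conversation resumed; keep waiting
        elif quiet_since is None:
            quiet_since = time.monotonic()         # start of a possible pause
        elif time.monotonic() - quiet_since >= PAUSE_SECONDS:
            break                                  # pause long enough: slip the message in
        time.sleep(0.1)
    speak(message)                                 # speak now, pause or not
```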

More recently, a colleague started up his computer near the end of a meeting, and it went into a cycle of displays: "don't turn me off"; "downloading one of thirty three". I took a record of the beginning of this interaction, but gave up and left way before the downloading had finished.
It might have been fine to pull the plug on the downloading (who knows?) but it wasn’t going to be a graceful exit. The technology seemed to be saying: “You’ve got to wait for me. I am in control here.” Presumably, the design was acceptable for a desktop machine that could just be left to complete the task, but it wasn’t for a portable computer that had to be closed up to be taken from the meeting room.

I have many more examples, and I am sure that every reader does too, of situations where the design of technology is inappropriate because the technology is unaware of the social context in which it is placed, and the development team have been unwilling or unable to make the technology better fit that context.

Saturday, 23 March 2013

"How to avoid mistakes in surgery": a summary and commentary

I've just returned from the US, and my one "must see" catch-up TV programme was "How to avoid mistakes in surgery" (now available on YouTube). It's great to see human error in healthcare getting such prominent billing, and being dealt with in such an informative way. This is a very quick synopsis (of the parts I particularly noted).

The programme uses the case of Elaine Bromiley as the starting point and motivation for being concerned about human error in healthcare. The narrator, Kevin Fong, draws on experience from other domains, including aviation, firefighting and Formula One pit-stops, to propose ways to make surgery and anaesthesia safer. Themes that emerge include:
  • the importance of training, and the value of simulation suites (simlabs) for setting up challenging scenarios for practice. This is consistent with the literature on naturalistic decision making, though the programme focuses particularly on the importance of situational awareness (seeing the bigger picture).
  • the value of checklists for ensuring that basic safety checks have been completed. This is based on the work of Atul Gawande, and is gaining recognition in UK hospitals. It is claimed that checklists help to change power relationships, particularly in the operating theatre. I don't know whether there is evidence to support this claim, but it is intuitively appealing. Certainly, changing power relationships is important in operating theatres, just as it has been recognised as being important in aviation.
  • the criticality of handovers from the operating theatre to the intensive care unit. This is where the learning from F1 pitstops comes in. It's about having a system and clear roles and someone who's in charge. For me, the way that much of the essential technology gets piled on the bed around the patient raised a particular question: isn't there a better way to do this?
  • dealing with extreme situations that are outside anything that has been trained for or anticipated. The example used for this was the Hudson River plane incident; ironically, on Thursday afternoon, about the time this programme was first broadcast, Pete Doyle and I were discussing this incident as an example that isn't really that extreme, because the pilot had been explicitly trained in all the elements of the situation, though not in the particular combination of them that occurred that day. There is a spectrum of resilient behaviour, and this is an example of well-executed behaviour, but it's not clear to me that it is really "extreme". The programme refers to the need to build a robust, resilient safety system. Who can disagree with this? It advocates an approach of "standardise until you have to improvise". This is sensible, but it could miss an important element: standardisation, done badly, reduces the professional expertise and skill of the individual, and it is essential to enhance that expertise if the individual is to be able to improvise effectively. I suspect that clinicians resist checklists precisely because they seem to reduce their professional expertise, when in fact they should be liberating clinicians to develop their expertise at the "edges", to deal better with the extreme situations. But of course that demands that clinical professional development includes opportunities and challenges to develop that expertise. That is a challenge!
The programme finishes with a call to learn from mistakes, to have a positive attitude to errors. Captain Chesley 'Sully' Sullenberger talks about "lessons bought with blood", and about the "moral failure of forgetting these mistakes and having to re-learn them". On the basis of our research to date, and of discussions with others in the US and Canada studying incident reporting and learning from mistakes, this remains a challenge for healthcare.

Monday, 4 March 2013

Ethics and informed consent: is "informed" always best?

I am in the US, visiting some of the leading research groups studying human factors, patient safety and interactive technologies. This feels like "coming home": not in the sense that I feel more at home in the US than the UK (I don't), but in that these groups care about the same things that we do – namely, the design, deployment and use of interactive medical devices. Talking about this feels like a constant uphill struggle in the UK, where mundane devices such as infusion pumps are effectively "invisible".

One of the issues that has exercised me today is the question of whether obtaining informed consent from patients who are receiving drugs via infusion devices is always the right thing to do. The group I'm working with here in Boston have IRB (Institutional Review Board, aka Ethics Board) clearance to obtain informed consent only from the lead nurse on the ward where they are studying the use of devices: not from all the nurses, never mind the patients. In one of our studies, by contrast, we were only allowed to observe a nurse programming a device in the middle of the night if we had obtained permission to observe from the patient before they fell asleep (which could have been several hours earlier), even though we were not gathering any patient data or disturbing the patient in any way. In fact, we probably disturbed the patient more by obtaining informed consent from them than we would have done by just observing the programming of the pump without their explicit knowledge.

We recently discussed with patient representatives the design of a planned study of possible errors with infusion devices. The feedback we got from one of them was: "patients and relatives need to have complete confidence in the staff and equipment, almost blind faith in many instances." There are times when ensuring that patients are fully informed is less important than giving them reassurance. The same is true for all of us when we have no control over the situation.

On the flight on the way here, there was an area of turbulence during which we all had to fasten our seatbelts. That's fine. What was less fine was the announcement from the pilot that we shouldn't be unduly worried about this (the implication being that we should be a little bit worried): as a passenger in seat 27F, what use was it for me to worry? No idea! It made the flight less comfortable for me, to no obvious benefit (to me or anyone else).

Similarly with patients: if we accept that studying the use of medical devices has potential long-term benefits, we also need to review how we engage patients in such studies. Does obtaining informed consent benefit them, or does it do the opposite? Maybe there are times when the principle of "blind faith" should dominate.