Saturday, 23 March 2013

"How to avoid mistakes in surgery": a summary and commentary

I've just returned from the US, and my one "must see" catch-up TV programme was "How to avoid mistakes in surgery" (now available on YouTube). It's great to see human error in healthcare getting such prominent billing, and being dealt with in such an informative way. This is a very quick synopsis of the parts I particularly noted.

The programme uses the case of Elaine Bromiley as the starting point and motivation for being concerned about human error in healthcare. The narrator, Kevin Fong, draws on experience from other domains including aviation, firefighting and Formula 1 pit-stops to propose ways to make surgery and anaesthesia safer. Themes that emerge include:
  • the importance of training, and the value of simulation suites (simlabs) for setting up challenging scenarios for practice. This is consistent with the literature on naturalistic decision making, though the programme focuses particularly on the importance of situational awareness (seeing the bigger picture).
  • the value of checklists for ensuring that basic safety checks have been completed. This is based on the work of Atul Gawande, and is gaining recognition in UK hospitals. It is claimed that checklists help to change power relationships, particularly in the operating theatre. I don't know whether there is evidence to support this claim, but it is intuitively appealing. Certainly, flattening power relationships is important in operating theatres, just as it has been recognised as being important in aviation.
  • the criticality of handovers from the operating theatre to the intensive care unit. This is where the learning from F1 pit-stops comes in: having a system, clear roles, and someone who is in charge. For me, the way that much of the essential technology gets piled on the bed around the patient raised a particular question: isn't there a better way to do this?
  • dealing with extreme situations that are outside anything that has been trained for or anticipated. The example used here was the Hudson River plane incident; ironically, on Thursday afternoon, around the time this programme was first broadcast, Pete Doyle and I were discussing this incident as an example that isn't really that extreme, because the pilot had been explicitly trained in all the elements of the situation, though not in the particular combination of them that occurred that day. There is a spectrum of resilient behaviour, and this is an example of well-executed behaviour, but it's not clear to me that it is really "extreme". The programme refers to the need to build a robust, resilient safety system. Who can disagree with this? It advocates an approach of "standardise until you have to improvise". This is true, but it could miss an important element: standardisation, done badly, reduces the professional expertise and skill of the individual, and it is essential to enhance that expertise if the individual is to be able to improvise effectively. I suspect that clinicians resist checklists precisely because they seem to reduce their professional expertise, when in fact checklists should liberate them to develop their expertise at the "edges", to deal better with the extreme situations. But of course that demands that clinical professional development include opportunities and challenges to develop that expertise. That is a challenge!
The programme finishes with a call to learn from mistakes, to have a positive attitude to errors. Captain Chesley 'Sully' Sullenberger talks about "lessons bought with blood", and about the "moral failure of forgetting these mistakes and having to re-learn them". On the basis of our research to date, and of discussions with others in the US and Canada studying incident reporting and learning from mistakes, this remains a challenge for healthcare.

Monday, 4 March 2013

Ethics and informed consent: is "informed" always best?

I am in the US, visiting some of the leading research groups studying human factors, patient safety and interactive technologies. This feels like "coming home": not in the sense that I feel more at home in the US than the UK (I don't), but in that these groups care about the same things that we do – namely, the design, deployment and use of interactive medical devices. Talking about this feels like a constant uphill struggle in the UK, where mundane devices such as infusion pumps are effectively "invisible".

One of the issues that has exercised me today is the question of whether it is always ethical to obtain informed consent from the patients who are receiving drugs via infusion devices. The group I'm working with here in Boston have IRB (Institutional Review Board, aka Ethics Board) clearance to obtain informed consent from just the lead nurse on the ward where they are studying the use of devices: not even from all the nurses, never mind the patients. In one of our studies, we were only allowed to observe a nurse programming a device in the middle of the night if we had obtained permission to observe from the patient before they had fallen asleep (which could have been several hours earlier), even though we were not gathering any patient data or disturbing the patient in any way. In fact, we were probably disturbing the patient more by obtaining informed consent from them than we would have been by just observing the programming of the pump without their explicit knowledge.

We recently discussed the design of a planned study of possible errors with infusion devices with patient representatives. Feedback we got from one of them was: "patients and relatives need to have complete confidence in the staff and equipment, almost blind faith in many instances." There are times when ensuring that patients are fully informed is less important than giving them reassurance. The same is true for all of us when we have no control over the situation.

On the flight on the way here, there was an area of turbulence where we all had to fasten our seatbelts. That's fine. What was less fine was the announcement from the pilot that we shouldn't be unduly worried about this (the implication being that we should be a little bit worried): as a passenger in seat 27F, what use was it for me to worry? No idea! It made the flight less comfortable for me, to no obvious benefit (to me or anyone else).

Similarly with patients: if we accept that studying the use of medical devices has potential long-term benefits, we also need to review how we engage patients in the study. Does obtaining informed consent give them benefits, or the opposite? Maybe there are times when the principle of "blind faith" should dominate.