Saturday 22 December 2018

Artificial (Un)Intelligence in healthcare

I've recently read Meredith Broussard's "Artificial Unintelligence". It's a really good read on both the strengths and the limitations of AI technologies. It is so important to talk about both what AI technologies can do and what they cannot -- whether that is "cannot" because we haven't got to that point yet or "cannot" because there's some inherent limitation in what technology can offer. For example, in healthcare, technology should get better and better at diagnosing clinical conditions, drawing on suitable descriptions of symptoms together with a growing body of relevant data and more advanced algorithms. The descriptions of symptoms are likely to include information in multiple modalities (visual information, verbal descriptions, etc.), while the data are likely to include individual data (biomarkers, patient history, genetic data, etc.) and population data (genomic data, epidemiological data, etc.). However, it's unlikely that technology will ever be able to deal with some of the complex and subtle challenges of healthcare: making people feel cared for (such as giving someone a meaningful hug), creating the social environment in which it's acceptable to talk through the emotional factors around stigmatised health conditions, etc.

At the Babylon Health event on their AI systems and vision in June this year, there was a lot of emphasis on diagnosis and streamlining care pathways, but conspicuously little on addressing the needs of people with complex health conditions or the broader delivery of care. There was, incidentally, an unnerving moment where an illustrative slide included the names of an entire research group from a London university whom I happen to know, suggesting a cavalier approach to data acquisition and informed consent. But that's another story. Many concerns have been raised about the "GP at Hand" model of care delivery, including concerns about equality of access to care, the financial model, the validation of the algorithms used, and the poor fit between the speed of change in the NHS and that required by tech entrepreneurs; some of these issues were covered (though without clear resolution) in a recent episode of Horizon on the BBC. Even more recently, Forbes has published an article on some of the limitations of AI in healthcare – in particular, the commercial (and publicity) imperative to move quickly, which is inconsistent with the safety imperative to move carefully and deliberately. There is a particular danger of belief in the potential of a technology turning into blind faith in its readiness for deployment.

One of the other key topics Broussard discusses is "technochauvinism" (the belief that technology is always the solution to any problem). We really need to develop a more robust discourse around this. Technology (including tech based around huge datasets and novel AI algorithms) has really exciting potential, but it needs to be understood, validated, and tested carefully in practice. And its limitations need to be discussed as well as its strengths. It's so easy to be partisan; it seems to demand more of people to have a balanced and evidenced discourse, so that we can introduce innovations that are really effective while also finding ways to value and deliver the aspects of healthcare that technology can't address.

Wednesday 28 November 2018

Palliative care technology (professional interest meets intensely personal experience)

About 10 years ago, when I first started working on infusion devices, I met a medical director who did a lot of work in hospices; he noted that the motors in the syringe drivers in use at that time hummed gently while delivering medication, and that many families hated the constant reminder that this meant that their loved one was on end-of-life care.

Recently, I have experienced this at first hand, except that the syringe driver being used was mercifully quiet, and did nothing to remind us of its presence. It only really featured when Dad (now very peacefully sleeping) had to be turned to a different position, when the care professionals had to take care not to occlude or dislodge the line. And yet this simple device had huge emotional import: it still, silently, announced that the end of a life was near. It was exactly the ending that we had agreed we would want if possible: peaceful, not disrupted by any invasive or disruptive interventions, with family around. And yet I still found myself wanting to remove the driver because it signified a conscious decision, or determination, that Dad was indeed going to die. Maybe if I removed the driver then Dad would spring back into life. So I find myself with very mixed emotions about the driver: gratitude that it did indeed contribute to a peaceful, pain-free ending, combined with distress that it announced and determined the inevitability of that ending.

As a technology professional, I of course also found the device interesting: the nurse who set it up did so with great care, and clearly found it easy to use; it is a task she performs routinely. But the three aspects that we highlight in our paper on "Bags, Batteries and Boxes" all came up in the conversation around the driver. The disposable bag provided was identical to the one featured on the left in Figure 1 of our paper (though all it did was notionally hide the driver, which was, in any case, hidden under the sheet). The nurse replaced the battery at the start and again after 24 hours to minimise the risk of it running out of charge. The box was locked to prevent tampering (correct) but, bizarrely, when it came to removing the driver after Dad's death, I was the only person in the room who knew where the key was located, which rather undermined its role as protection against tampering. Since no nurse visited after Dad's death, and I didn't want him to be moved while still attached to said driver, I asked the doctor to remove the butterfly needle. Clearly, the doctor had never done such a thing before, reinforcing a finding from our study of incident reports involving syringe drivers used in private homes: that doctors are sometimes put in the position of having to use technology with which they have no familiarity. Thankfully, the doctor did kindly remove the line, gently, as if removing it from a living patient, and we could send Dad off suitably clothed and unencumbered by redundant technology. I can only assume that the driver was returned to the community nurse team later.

I'll close by thanking the amazing staff at Tegfield House, who cared so diligently for both Dad and us and the equally amazing NHS nurses and doctors who cared for Dad over many years, and particularly in his final hours.

Monday 25 June 2018

Happy 70th birthday (to digital and to the NHS)!

It's been widely publicised that the NHS celebrates its 70th birthday on 5th July this year. When preparing to be interviewed for a Telegraph podcast on digital health, I realised that it's also the 70th birthday of the "Manchester Baby", the first stored-program computer (21st June). So, in a very real sense, both parents of digital health in the UK were born 70 years ago. There are other relevant birthdays to celebrate too, such as the 60th of the journal Human Factors (for which usable health technology is an important theme) and the 500th of the Royal College of Physicians.

[Image: the Manchester Baby, head on]

We've come such a long way in 70 years. Many of the major advances in that time can be attributed to a better understanding of hygiene and antibiotics, and to pharmaceuticals more generally. As advances in pharma become more costly, digitally enabled health and wellbeing are likely to provide the greater gains.

The history of analogue medical devices goes back hundreds, or even thousands, of years. For example, surgical knives are believed to date from Mesolithic times (around 8000 BC), syringes from the 1500s, and the first stethoscope from 1816.

There have been transformational developments in digital health technologies from the 1970s onwards. People may find it difficult to remember back to the times when there was no such thing as intensive care (as we now understand it), but it has emerged within our lifetimes: critical care medicine, with its focus on continuous monitoring and intervention, was established in the late 1950s. Imaging is another area that has grown in significance, from x-rays onwards – largely since the 1970s, when Computerised Tomography (CT) and Magnetic Resonance Imaging (MRI) were introduced. Now computing is fast enough that it is becoming possible to use imaging in real time during surgery, and to present interactive 3D images (built up from 2D slices).
 
These are part of another phase of rapid developments, brought about partly by the availability of consumer devices, including wearables, that are becoming accurate enough to substitute for professional devices. Big data is another driver: genomics is improving our understanding of the interrelationships between genes and their combined influence on health, while consumer genetic testing kits are making new health-relevant information available to the individual.

As the digital computer and the NHS reach their 70th birthdays, we are seeing huge advances in the technologies that address relatively simple problems. However, we have made much less progress in the technologies for complex problems. Go into any hospital and look at the complexity of the systems clinicians have to use – e.g., 20-30 different interactive technologies on a general ward, all with different user interfaces, all of which every nurse is expected to be able to use. From a patient perspective, someone managing multiple health conditions has to integrate information across the different tools and specialisms they have to engage with. We are seeing growing friction as the gap widens between what is theoretically possible and what is currently practicable.
 
What do the next 70 years promise? It is of course hard to say. A paperless NHS? – probably not by 2020, but maybe by 2088. Patient-controlled electronic health records? – maybe, if people are appropriately educated and supported in managing the burden of care; this will require us to address health inequalities brought about by differentials in income, education, technology literacy, health literacy, etc. The huge challenge is not the technology, but the individual and social factors, and the regulations, around it. This will require a new approach to data privacy and security, funding models and regulations that are fit for the 21st century, and education for clinicians, technologists and the public to ensure these changes are beneficial for all.
 
Of course, the NHS is just one healthcare delivery organisation amongst many globally. Some other health providers are doing things on a shoestring yet overtaking the West in many ways by being agile – e.g., investing directly in mobile technology.
 
However, whatever advances we see in technology, care is still first and foremost about the human touch. The technology is there to support people.

Sunday 18 March 2018

Invisible work

I have been on strike for much of the past four weeks – at least notionally. The truth is more nuanced than that, because I don't actually want my students and other junior colleagues to be disadvantaged by this action. I am, after all, fighting for the future of university education: their future. Yet I do want senior management and the powerful people who make decisions about our work and our pensions to be aware of the strength of feeling, as well as the rational arguments, around the pensions issue.

There have been some excellent analyses of the problem by academic experts from a range of disciplines. Here are some of my favourites (in no particular order):
As well as standing on picket lines, marching, discussing the issues around the strike, and not doing work that involves crossing said picket lines, I have continued to do a substantial amount of work. It has made me think more about the nature of invisible work. Bonnie Nardi and Yrjo Engestrom identify four kinds of invisible work:
  1. work done in invisible places, such as the highly skilled behind-the-scenes work of reference librarians; to this I would add most of the invisible work done by university staff, out of sight and out of hours.
  2. work defined as routine or manual that actually requires considerable problem solving and knowledge, such as the work of telephone operators; don't forget completing a Researchfish submission, or grappling with the Virtual Learning Environment or many other enterprise software systems.
  3. work done by invisible people, such as domestics (and sometimes Athena SWAN teams!);
  4. informal work processes that are not part of anybody’s job description but which are crucial for the collective functioning of the workplace, such as regular but open-ended meetings without a specific agenda, informal conversations, gossip, humor, storytelling.
The time for (4) has been sadly eroded over the years as demands and expectations have risen without corresponding rises in resourcing.

To these, I would add work that is invisible because it is apparently ineffectual. For example, I wrote to our Provost about 12 days ago, but I have no evidence that it was read; it certainly hasn't been responded to in any visible way. (The letter is reproduced below for the record.)

The double-think required to simultaneously be on strike while also delivering on time-limited commitments to colleagues and students has forced me to also develop new approaches to revealing and hiding work. For example, 
  • I have started logging my own work hours so that the accumulated time is visible to me. Although I've been working way more than the hours set out in the Working Time Directive, I'm going to try to bring the time worked down to comply with it; this should help me say "no" more assertively in future. That's the theory, at least... (a minimal sketch of such a log appears after this list).
  • I have started saving emails as drafts so as not to send them "out of hours". There are 21 items in my email outbox as I type this; I'll look incredibly productive first thing on Monday morning!
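For the technically inclined, here is a minimal sketch of the kind of work-hours log I mean. All the data and names are invented for illustration; note too that the Working Time Directive's 48-hour weekly limit strictly applies as an average over a reference period, so flagging individual weeks, as this does, is more conservative than the law requires.

```python
from collections import defaultdict
from datetime import date

# Invented sample data: one (day, hours worked) entry per working day.
work_log = [
    (date(2018, 3, 12), 9.5),
    (date(2018, 3, 13), 11.0),
    (date(2018, 3, 14), 10.5),
    (date(2018, 3, 15), 12.0),
    (date(2018, 3, 16), 8.0),
]

# Total the hours per ISO week and flag any week over the 48-hour limit.
hours_per_week = defaultdict(float)
for day, hours in work_log:
    year, week, _ = day.isocalendar()
    hours_per_week[(year, week)] += hours

for (year, week), total in sorted(hours_per_week.items()):
    flag = "  <-- over 48h" if total > 48 else ""
    print(f"{year}-W{week:02d}: {total:.1f}h{flag}")
```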
And finally, I will make visible the letter I wrote to the Provost:


Thank you for this encouraging message last week. You are right that none of us takes strike action lightly. We all want to be doing and supporting excellent teaching, research and knowledge transfer, but we are extremely concerned about the proposed pension changes, and we have found no other way to be heard.

I’ve worked in universities since 1981 and this is the first time I have taken strike action. The decision to strike has been one of the harder decisions I have taken in my professional career, but I think the impact of the proposed pension changes on our junior colleagues (and hence on the future of universities) is unacceptable, and I am not persuaded that a DB scheme is unaffordable.

Please continue to work with the other university leaders to find an early resolution to this dispute. UCL isn’t just estates and financial surplus: as you say, it’s a community of world-leading, committed people who work really hard, and who merit an overall remuneration package that is reflective of that. That includes pensions that aren’t a stock market lottery for each individual.

I’d like to be in my office meeting PhD students and post-docs next Monday morning, and in a lecture theatre with my MSc students on Monday afternoon. Please do everything in your power to bring this dispute to a quick resolution so that there’s a real possibility that “normal service” can be resumed next week.

Sunday 4 March 2018

How not to design the user experience: update 2018

In November 2014, I wrote a summary of my experience of entering research data in Researchfish. Since then, aspects of the system have improved: at least some of the most obvious bugs have been ironed out, and being able to link data from ORCID makes one tedious aspect of the task (entering data about particular publications) significantly easier. So well done to the Researchfish team on fixing those problems. It's a pity the system is still not fit for purpose, despite the number of funders who are supporting (mandating) its use.

The system is still designed without any consideration of how people conceptualise their research outputs – or at least, not how I do. According to Researchfish, it takes less than a lunch break to enter all the data. There are two problems with this:
1. Few academics that I know have the time to take a lunch break.
2. So far, today, it has taken me longer than that just to work out a strategy for completing this multi-dimensional task systematically. It's like 3-D Sudoku, but less fun.

Even for publications, it's a two-dimensional task: select publications (e.g., from ORCID) and select the grants to which they apply. But if you just do this as stated, you get many-to-many relationships, with every publication assigned to grants that it isn't associated with as well as to the one(s) it is. And yes, I have tested this. So you have to decide which grant you're going to focus on, go through the list and add the relevant publications... then go around the loop (add new publications > select ORCID > search > select publications > select grant) repeatedly for all grants. Maybe there's a faster way to do it, but I haven't discovered it yet. Oh: and if you make a mistake, there isn't an easy way to correct it, so there is probably over-reporting as well as under-reporting on many grants.
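To make the shape of the task concrete, here's a minimal sketch of the many-to-many structure involved. Everything in it (names, data) is invented for illustration; it is emphatically not Researchfish's actual data model or API.

```python
# Hypothetical sketch of the publication-grant linking task.
publications = ["paper_A", "paper_B", "paper_C"]   # e.g., imported via ORCID
grants = {
    "grant_1": {"paper_A", "paper_B"},             # the correct associations
    "grant_2": {"paper_B", "paper_C"},
}

# The underlying data is many-to-many, so the natural unit of work is the
# (publication, grant) pair:
correct_links = {(pub, grant) for grant, pubs in grants.items() for pub in pubs}

# What the interface requires instead: one full pass of the loop per grant
# (add new publications > select ORCID > search > select publications >
# select grant), manually re-filtering the publication list each time.
links_via_ui = set()
for grant, pubs_for_this_grant in grants.items():
    for pub in publications:                       # scan the whole list again...
        if pub in pubs_for_this_grant:             # ...and filter by hand
            links_via_ui.add((pub, grant))

assert links_via_ui == correct_links               # same outcome, far more effort
```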
I'm still trying to guess what "author not available" means in the information about a publication. My strategy for working out which paper each line refers to has been to keep Google Scholar open in parallel and search for the titles there, because those make more sense to me.

In the section on reporting key findings of a grant, when you save the entry, it returns you to the same page. Why would you want to save multiple times, rather than just moving on to the next step? Why isn't there a 'next' option? And why, when you have said there is no update on a completed grant, does it still take you to the update page? What was the point of the question?

When you're within the context of one award and you select publications, it shows all publications for all awards (until you explicitly select the option to focus on this award). Why? I'm in a particular task context...

When you're in the context of an award where you are simply a team member, you can filter by publications you've added, or by publications linked to this award, but not by publications that you've added that are also linked to this award. Those are the ones that I know about, and the ones that I want to check or update.
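The missing filter is simply the intersection of the two filters that already exist. A tiny sketch, with invented data standing in for those two filters:

```python
# Invented data standing in for the two filters the system already offers.
added_by_me = {"paper_A", "paper_B", "paper_D"}      # publications I added
linked_to_award = {"paper_B", "paper_C", "paper_D"}  # publications linked to this award

# The filter I actually want is just the intersection of the two:
mine_and_linked = added_by_me & linked_to_award
print(sorted(mine_and_linked))                       # ['paper_B', 'paper_D']
```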

Having taken a coffee break, I returned to the interface to discover I had been logged out. I don't actually know my login details because the first time I logged in this morning I did so via ORCID. That option isn't available on the login page that appears after time-out. This is further evidence of poor system testing and non-existent user testing.

I could go on, but life is too short. There is no evidence of the developers having considered either conceptual design or task structures. There is no evidence that the system has actually been tested by real users who have real data entry tasks and time constraints. I really cannot comprehend how so many funders can mandate the use of a system that is so poorly designed, other than because they have the power to do so.

Monday 19 February 2018

Qualitative research comes of age

For a long, long time, qualitative research has felt like a "poor relation" to quantitative: so much more subjective, so much harder to generalise, so much more reliant on the skills of the researcher to deliver quality.

I'm delighted to see that it's becoming more mainstream – at least based on the evidence of a couple of recent publications in the mainstream research literature that set out how to report qualitative research well. One is in the medical literature, and the other in the psychology literature. The questions of what constitutes high-quality qualitative research, and how to report it, are ones we have grappled with, particularly when the findings of a study don't align well with the original aims because you discover that those aims were based on incorrect assumptions about the situation. I still get the sense that there is an asymmetry here: qualitative researchers have to justify their methods to quantitative researchers much more forcefully than the converse. But this seems like progress nevertheless.

Quantitative research tells you about outcomes, but gives little (or no) insight into causes or processes. To improve outcomes, you really need to understand causes too...

Friday 16 February 2018

Learning from past incidents?

I've been thinking about incident reporting in healthcare, in terms of what we can learn about the design of medical devices, based on both what is theoretically possible and what actual incident reports show us.

Incident reporting systems (e.g., NRLS) are a potential source of information about poor usability and poor utility of interactive medical devices. However, because healthcare culture typically focuses on outcomes rather than processes, instances of sub-optimal use typically pass unremarked. There is growing concern that there is under-reporting of incidents, but little firm evidence on the scale of that under-reporting. One study that compared observed errors against reported incidents involving intravenous medications identified 55 rate-deviation and medication errors in nine hours of observation; only 48 such incidents had been reported through the hospital incident reporting system over the previous two years, suggesting a reporting rate of about 0.1% (a rough reconstruction of that figure appears below).

Firth-Cozens et al. investigated the causes of low reporting rates, even when clinicians had identified errors or examples of poor care. All groups of participants “considered that minor, commonplace or unintentional mistakes, ‘genuine or honest’ errors, one-off errors, or ones for which a subordinate is ‘obviously sorry’ or has insight, need not be reported”. Examples reported by their participants included incidents involving infusion pumps: problems for which the design of the pumps, or the protocols for their use, were contributing factors.

Even when incidents are reported, those reports might not deliver insights into what went wrong. In our recent study of incident reports involving home use of infusion pumps, we found that reports gave much greater insight into how people detected and recovered from the device not working than into what had caused the device not to work properly in the first place.
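Here is one way a figure of that order can be reconstructed from those numbers. The observed figures (55 errors in nine hours; 48 reports in two years) are from the study; the extrapolation assumption (how many hours of comparable infusion activity the two-year reporting window covers) is mine, for illustration only.

```python
# Reconstructing the order of magnitude of the reporting rate. Observed
# figures are from the study cited above; the extrapolation is an assumption.
observed_errors = 55
observed_hours = 9
errors_per_hour = observed_errors / observed_hours      # ~6.1 errors/hour

# Assume ~12 hours/day of comparable infusion activity across the two-year
# window over which the 48 incident reports were collected.
hours_per_day = 12
total_hours = 2 * 365 * hours_per_day                   # 8,760 hours
estimated_errors = errors_per_hour * total_hours        # ~53,500 errors

reported = 48
print(f"Reporting rate: {reported / estimated_errors:.2%}")  # ~0.09%, i.e. about 0.1%
```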

While incident reporting systems might be one source of information on poor design or use of interactive devices in healthcare, they are not a reliable route for identifying instances of poor design. Once an incident is reported, it is important that the role of device design in contributing to the incident be properly considered, and not simply dismissed with the common response that the device “performed as designed” and that the incident was therefore a user error.