Saturday 22 December 2018

Artificial (Un)Intelligence in healthcare

I've recently read Meredith Broussard's "Artificial Unintelligence". It's a really good read on both the strengths and the limitations of AI technologies. It is so important to talk about both what AI technologies can do and also what they cannot -- whether that is "cannot" because we haven't got to that point yet or "cannot" because there's some inherent limitation in what technology can offer. For example, in healthcare, technology should get better and better at diagnosing clinical conditions based on suitable descriptions of symptoms together with a growing body of relevant data and more advanced algorithms. The descriptions of symptoms are likely to include information in multiple modalities (visual information, verbal descriptions, etc.), while data are likely to include individual data (biomarkers, patient history, genetic data, etc.) and population data (genomic data, epidemiological data, etc.). However, it's unlikely that technology is ever going to be able to deal with some of the complex and subtle challenges of healthcare: making people feel cared for (such as giving someone a meaningful hug), creating the social environment in which it's acceptable to talk through the emotional factors around stigmatised health conditions, and so on.

At the Babylon Health event on their AI systems and vision in June this year, there was a lot of emphasis on diagnosis and streamlining care pathways, but conspicuously little on addressing the needs of people with complex health conditions or the broader delivery of care. There was, incidentally, an unnerving moment where an illustrative slide included the names of an entire research group from a London university who I happen to know, suggesting a cavalier approach to data acquisition and informed consent. But that's another story. Many concerns have been raised about the "GP at Hand" model of care delivery, including concerns about equality of access to care, the financial model, the validation of the algorithms used, and the poor fit between the speed of change in the NHS and that required for tech entrepreneurs; some of these issues were covered (though without clear resolution) in a recent episode of Horizon on the BBC. Even more recently, Forbes has published an article on some of the limitations of AI in healthcare – in particular, the commercial (and publicity) imperative to move quickly, which is inconsistent with the safety imperative to move carefully and deliberately. There is a particular danger of belief in the potential of a technology turning into blind faith in its readiness for deployment.

One of the other key topics Broussard talks about is "technochauvinism" (the belief that technology is always the solution to any problem). We really need to develop a more robust discourse around this. Technology (including tech based around huge datasets and novel AI algorithms) has really exciting potential, but it needs to be understood, validated, and tested carefully in practice. And its limitations need to be discussed as well as its strengths. It's so easy to be partisan; it seems to demand more of people to have a balanced and evidenced discourse, so that we can introduce innovations that are really effective while also finding ways to value and deliver the aspects of healthcare that technology can't address.