
Saturday, 22 December 2018

Artificial (Un)Intelligence in healthcare

I've recently read Meredith Broussard's "Artificial Unintelligence". It's a really good read on both the strengths and the limitations of AI technologies. It is so important to talk about both what AI technologies can do and what they cannot – whether that is "cannot" because we haven't got to that point yet or "cannot" because there's some inherent limitation in what technology can offer. For example, in healthcare, technology should get better and better at diagnosing clinical conditions based on suitable descriptions of symptoms together with a growing body of relevant data and more advanced algorithms. The descriptions of symptoms are likely to include information in multiple modalities (visual information, verbal descriptions, etc.), while data are likely to include individual data (biomarkers, patient history, genetic data, etc.) and population data (genomic data, epidemiological data, etc.). Together with novel algorithms, these should steadily improve diagnosis. However, it's unlikely that technology will ever be able to deal with some of the complex and subtle challenges of healthcare: making people feel cared for (such as giving someone a meaningful hug), creating the social environment in which it's acceptable to talk through the emotional factors around stigmatised health conditions, and so on.
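To make that idea of combining modalities a little more concrete, here is a minimal, hypothetical sketch of the kind of pipeline such diagnostic support might use: free-text symptom descriptions fused with numeric biomarkers to predict a diagnosis. Everything here – the feature names, the toy data, and the choice of scikit-learn – is my own illustration, not a description of any real clinical system.

```python
# Hypothetical multimodal diagnostic sketch: text symptoms + numeric biomarkers.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative (invented) records: free-text symptoms plus two numeric biomarkers.
records = pd.DataFrame({
    "symptoms": ["persistent cough and fever", "joint pain and fatigue",
                 "headache and blurred vision", "persistent cough, night sweats"],
    "crp": [42.0, 8.5, 3.1, 55.0],   # C-reactive protein, mg/L (illustrative)
    "age": [61, 45, 33, 70],
})
labels = ["infection", "autoimmune", "neurological", "infection"]

# Fuse the modalities: TF-IDF for the text, scaling for the numeric features.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "symptoms"),
    ("numeric", StandardScaler(), ["crp", "age"]),
])
model = Pipeline([("features", features), ("classifier", LogisticRegression())])
model.fit(records, labels)

# A toy prediction for a new patient description.
print(model.predict(pd.DataFrame({"symptoms": ["fever and cough"],
                                  "crp": [38.0], "age": [58]})))
```

The point of the sketch is only that individual data and symptom descriptions can be combined in one model; a real system would need far richer data, validation, and clinical oversight.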

At the Babylon Health event on their AI systems and vision in June this year, there was a lot of emphasis on diagnosis and streamlining care pathways, but conspicuously little on addressing the needs of people with complex health conditions or the broader delivery of care. There was, incidentally, an unnerving moment when an illustrative slide included the names of an entire research group from a London university whom I happen to know, suggesting a cavalier approach to data acquisition and informed consent. But that's another story. Many concerns have been raised about the "GP at Hand" model of care delivery, including concerns about equality of access to care, the financial model, the validation of the algorithms used, and the poor fit between the speed of change in the NHS and that required by tech entrepreneurs; some of these issues were covered (though without clear resolution) in a recent episode of Horizon on the BBC. Even more recently, Forbes has published an article on some of the limitations of AI in healthcare – in particular, the commercial (and publicity) imperative to move quickly, which is inconsistent with the safety imperative to move carefully and deliberately. There is a particular danger of belief in the potential of a technology turning into blind faith in its readiness for deployment.

One of the other key topics Broussard talks about is "technochauvinism" (the belief that technology is always the solution to any problem). We really need to develop a more robust discourse around this. Technology (including tech based around huge datasets and novel AI algorithms) has really exciting potential, but it needs to be understood, validated, and tested carefully in practice. And its limitations need to be discussed as well as its strengths. It's so easy to be partisan; it seems to demand more of people to maintain a balanced and evidenced discourse, so that we can introduce innovations that are really effective while also finding ways to value and deliver the aspects of healthcare that technology can't address.

Wednesday, 17 August 2016

Reflections on two days in a smart home

I've just had the privilege of spending two days in the SPHERE smart home in Bristol. It has been an interesting experience, though much less personally challenging than I had expected. For example, it did not provoke the intensity of reaction from me that wearing a fitbit did. What have I learned? That a passive system that just absorbs data, and that can't be inspected or interacted with by the occupant, quickly fades into the background – but that it demands huge trust of the occupant (because it is impossible to anticipate what others can learn about one's behaviour from data that one cannot see). And that as well as being non-threatening, technology has to offer meaningful value and benefit to the user.

Reading the advance information about staying in the SPHERE house, I was reassured that the team had considered safety and privacy issues well. I wasn't sure what to expect of the wearable devices or how accurate they would be. My experience of wearing a fitbit previously had left me with low expectations of accuracy. I anticipated that wearing devices in the house might make me feel like a lab rat, and I was concerned about wearing anything outside the house. It turned out that the only wearable was on the wrist, and was only worn in the house anyway, making it less obtrusive than commercial wearables.

I had no idea what interaction mechanisms to expect: I expected to be able to review the data being gathered in real time, and wondered whether I would be able to draw any inferences from that data. Wrong! The data was never available for inspection, because of the experimental status of the house at the time.

When we arrived, it was immediately obvious that the house is heavily wired, but most of the technology is one-way (sucking information without giving anything back to the participant). Most of the rooms are quite sparse and magnolia. The dining room feels very high-tech, with wires and chips and stuff all over the place – more like a lab than a home. To me, this makes that room a very unwelcoming place to be, so we chose to eat dinner in the living room.

I was much more aware of the experimental aspects of the data gathering (logging our activities) than of the lifestyle (and related) monitoring. My housemate seemed to be quite distracted by the video recording for a while; I was less distracted by it than I had expected. The fact that I couldn't inspect the data meant that I had no opportunity to reflect on it, so it quickly became invisible to me.
 
The data gathering that we did manually was meant to define the ‘ground truth’, but with the best will in the world I’m not sure how accurate the data we provided was – we both kept forgetting to carry the phones everywhere with us, and kept forgetting to start new activities or finish completed ones. Recording activities involves articulating the intention to do something (such as making a hot drink or putting shopping away) just before starting to do it, and then articulating that it has been finished when it’s over. This isn't natural! Conversely, at one point I happened to put the phone on a bedside table and accidentally started logging "sleep" through the NFC tag!
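For what it's worth, that logging protocol can be sketched in a few lines of code. The sketch below is my own reconstruction under assumptions (the activity names, a CSV file as the store), not the actual SPHERE logging app, but it shows how easily a forgotten "finish" leaves a gap in the ground truth.

```python
# A hypothetical ground-truth activity logger: announce an activity when
# starting it, and again when finishing, so labelled intervals can be
# matched against the house's sensor data.
import csv
import time

class ActivityLogger:
    """Records start/finish events as (activity, start, end) intervals."""

    def __init__(self, path="ground_truth.csv"):
        self.path = path
        self.open_activities = {}  # activity name -> start timestamp

    def start(self, activity):
        self.open_activities[activity] = time.time()

    def finish(self, activity):
        started = self.open_activities.pop(activity, None)
        if started is None:
            return  # a finish with no matching start: the label is lost
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow([activity, started, time.time()])

logger = ActivityLogger()
logger.start("making a hot drink")
# ... the activity actually happens ...
logger.finish("making a hot drink")
# Forgetting to call finish() leaves an open interval with no end time –
# exactly the labelling gap described above when we forgot to log.
```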

By day 2, I was finding little things oppressive: the fact that the light in the toilet didn’t work and neither did the bedside lights; the lack of a mirror in the bedroom; the fact that everything is magnolia; and the trailing wires in several places around the house. I hadn't realised how important being "homely" was to me, and small touches like cute doorstops didn't deliver.

To my surprise, the room I found least private (even though it had no video) was the toilet: the room is so small and the repertoire of likely actions so limited that it felt as if the wearable was transmitting details that would be easily interpreted. I have no way of knowing whether this is correct (I suspect it is not).

At one point, the living room got very hot so I had to work out how to open the window; that was non-trivial and involved climbing on the sofa and the window sill to work out how it was secured. I wonder what that will look like as data, but at least we had fresh air! 

By the time we left, I was getting used to the ugliness of the technology, and even to the neutrality of the house colours. I had moved things around to make life easier – e.g., moving the house telephone off my bedside table to make space for my water and my own phone (though having water next to the little PCB felt like an accident waiting to happen).

My housemate worked with the SPHERE team to visualise some data from three previous residents, which showed that all three of them had eaten their dinners in the living room rather than the dining room. We both found this slightly amusing, but also affirming: other people had made the same decision as we did.
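As a hedged illustration of what might sit behind such a visualisation: given (resident, room, timestamp) occupancy events, one can simply count where each resident was during a nominal dinner window. The event format and the 18:00–20:00 window are my assumptions, not the SPHERE team's actual pipeline.

```python
# Hypothetical sketch: count which room each resident occupied at dinner time.
from collections import Counter
from datetime import datetime

events = [  # invented occupancy events: (resident, room, timestamp)
    ("A", "living room", datetime(2016, 8, 1, 18, 40)),
    ("B", "living room", datetime(2016, 8, 2, 19, 5)),
    ("C", "living room", datetime(2016, 8, 3, 18, 55)),
    ("A", "dining room", datetime(2016, 8, 1, 13, 10)),  # lunch: ignored
]

DINNER_START, DINNER_END = 18, 20  # assumed dinner window, in hours

dinner_rooms = Counter(
    (resident, room)
    for resident, room, ts in events
    if DINNER_START <= ts.hour < DINNER_END
)
for (resident, room), n in dinner_rooms.items():
    print(f"resident {resident}: {n} dinner(s) in the {room}")
```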

The main issue to me was that the ‘smart’ technology had no value to me as an inhabitant in the house in its current experimental state. And I would really expect to go beyond inspectability of data to interactivity before the value becomes apparent. Even then, I’m not sure whether the value is short- or long-term: is it about learning about health and behaviours in the home, or is it about real-time monitoring and alerting for health management? The long-term value will come with the latter; for the former, people might just want a rent-a-kit that allows them to learn about their behaviours and adapt them over maybe 2-3 months. But this is all in the future. The current home is a prototype to test what is technically possible. The team have paid a lot of attention to privacy and trust, but not much yet to value. That's going to be the next exciting challenge...

Tuesday, 26 January 2016

The lifecourse and digital health

I've just been away for the weekend with a group of people of varying ages. Over breakfast, I was chatting with Diane (names have been changed), who surmised that she was the oldest person there. I looked quizzical: surely she's in her 70s and Edna is in her late 80s? But no: apparently, Diane is 88, and thinks that Edna is only 86. Appearances can be deceptive. Diane has a few health niggles (eyesight not as good as it once was, hip occasionally twinges) but she remains fit and active, physically and mentally. I hope I will age as well.

Meanwhile, last week I was at an Alan Turing Institute workshop on "Opportunities and Challenges for Data Intensive Healthcare". The starting point was that data sciences have always played a key role in healthcare provision and deployment of preventative interventions, and that we need novel mathematical and computational techniques to exploit the vast quantities of health and lifestyle data that are now being generated. Better computation is needed to deliver better health management and healthcare at lower cost. And of course people also need to be much more engaged in their own care for care provision to be sustainable.

There was widespread agreement at the meeting that healthcare delivery is in crisis, with rising costs and rising demands, and that there is a need for radical restructuring and rethinking. For me, one of the more telling points made (by a clinician) is that significant resources are expended to little good effect in the interests of keeping people alive, when perhaps they should be left to die peacefully. The phrase used was "torturing people to death". I don't imagine many of us want to die in intensive care or in an operating theatre. Health professionals could use better data analytics to make more informed decisions about when "caring" means intervening and when it means stepping back and letting nature take its course.

In principle, better data, better data analysis, and better personalised health information should help us all to better manage our own health and wellbeing – not taking over our lives, but enabling us to live our lives to the full. My father-in-law's favourite phrase was "I'd like a bucket full of health please". But there's no suggestion that any of us will (or wants to) live forever. At the meeting, someone suggested that we should be aiming for the "Duracell bunny" approach to life: live well, live long, die quickly. Of course, that won't be possible for everyone (and different people have different conceptions of what it means to "live well").

This presents a real challenge for digital health and for society: to re-think how each and every one of us lives the best life we can, supported by appropriate technology. There's a widespread view that "data saves lives"; let's also try to ensure that the saved lives are worth living!