
Monday, 25 March 2019

Don't forget!

Our mother has advanced Alzheimer's disease. Our father had vascular dementia. For a long time, we found it difficult to locate resources that helped us to understand the diseases, our parents' experiences, or what we (as their children) could do to support them. We found quite a lot of material that was patronising, overly general, or overly technical.

The following are some of the resources that I have found most helpful to date (in no particular order):
  • Wendy Mitchell's personal narrative of her experience of early-onset Alzheimer's gives an amazing insight into the challenges one person faces and the strategies she has established to overcome them.
  • There are many variants of dementia, with different causes and patterns of progression. These are well summarised by Dementia Australia.
  • Alzheimer's disease is the most common form of dementia. This article in Nature Education gives some insight into the specifics of AD.
  • Five "pocket" (i.e., brief!)  films about aspects of Alzheimer's capture the science in neat little chunks.
  • The Dementia UK site gives more insight into managing and living with AD. Follow links from there to find out about other kinds of dementia.
  • A personal narrative by a child of someone living with dementia emphasises the value of good care homes and their specialist care.
  • As someone loses abilities, it's useful to find products that are specifically designed to support (and bring pleasure to) people with dementia, such as those from Unforgettable.
  • In the UK, legal aspects of supporting someone with dementia include setting up lasting power of attorney while the person still has the mental capacity to do so, and possibly applying for attendance allowance to help towards the cost of care when it becomes necessary.
Maybe one day I'll link these notes to theory of information seeking, but for now it's just a place to gather some links.

Wednesday, 28 November 2018

Palliative care technology (professional interest meets intensely personal experience)

About 10 years ago, when I first started working on infusion devices, I met a medical director who did a lot of work in hospices; he noted that the motors in the syringe drivers in use at that time hummed gently while delivering medication, and that many families hated the constant reminder that this meant that their loved one was on end-of-life care.

Recently, I have experienced this at first hand, except that the syringe driver being used was mercifully quiet, and did nothing to remind us of its presence. It only really featured when Dad (now very peacefully sleeping) had to be turned to a different position, when the care professionals had to take care not to occlude or dislodge the line. And yet this simple device had huge emotional import: it still, silently, announced that the end of a life was near. It was exactly the ending that we had agreed we would want if possible: peaceful, undisturbed by any invasive interventions, with family around. And yet I still found myself wanting to remove the driver because it signified a conscious decision, or determination, that Dad was indeed going to die. Maybe if I removed the driver then Dad would spring back into life. So I find myself with very mixed emotions about the driver: gratitude that it did indeed contribute to a peaceful, pain-free ending, combined with distress that it announced and determined the inevitability of that ending.

As a technology professional, I of course also found the device interesting: the nurse who set it up did so with great care, and clearly found it easy to use – it is a task she performs routinely. But the three aspects that we highlight in our paper on "Bags, Batteries and Boxes" all came up in the conversation around the driver. The disposable bag provided was identical to the one featured on the left in Figure 1 of our paper (though all it did was notionally hide the driver, which was, in any case, hidden under the sheet). The nurse replaced the battery at the start and again after 24 hours to minimise the risk of it running out of charge. The box was locked to prevent tampering (correct) but, bizarrely, when it came to removing the driver after Dad's death, I was the only person in the room who knew where the key was located, which rather undermined its role as protection against tampering. Since no nurse visited after Dad's death and I didn't want him to be moved while still attached to said driver, I asked the doctor to remove the butterfly needle. Clearly, the doctor had never done such a thing before, reinforcing findings from our study of incident reports involving syringe drivers used in private homes: doctors are sometimes put in the position of having to use technology with which they have no familiarity. Thankfully, the doctor did kindly remove the line, gently, as if removing it from a living patient, and we could send Dad off suitably clothed and unencumbered by redundant technology. I can only assume that the driver was returned to the community nurse team later.

I'll close by thanking the amazing staff at Tegfield House, who cared so diligently for both Dad and us and the equally amazing NHS nurses and doctors who cared for Dad over many years, and particularly in his final hours.

Friday, 7 April 2017

If the user can’t use it, it doesn’t work: focusing on buying and selling


"If the user can’t use it, it doesn’t work": This phrase, from Susan Dray, was originally addressed at system developers. It presupposes good understanding of who the intended users are and what their capabilities are. But the same applies in sales and procurement.

In hospital (and similar) contexts, this means that procurement processes need to take account of who the intended users of any new technology are. For example, who are the intended users of new wireless integrated glucometers, or of new infusion pumps that need to have drug libraries installed and maintained... and that also have to be used during routine clinical care? What training will they need? How will the new devices fit into (or disrupt) their workflow? And so on. If any of the intended users can’t use it, then the technology doesn’t work.

I have just encountered an analogous situation with some friends. These friends are managing multiple clinical conditions (including Alzheimer’s, depression, the after-effects of a mini-stroke, and type II diabetes) but are nevertheless living life to the full and coping admirably. But recently they were sold a sophisticated “Agility 3” alarm system, comprising a box on the wall with multiple buttons and alerts, a wearable “personal attack alarm”, and two handheld controllers (as well as PIR sensors, a smoke alarm and more). They were persuaded that this would address all their personal safety and home security needs. I don’t know whether the salesperson referred directly or obliquely to any potential physical vulnerability. But actually their main vulnerability was that they no longer have the mental capacity to assess the claims of the salesperson, let alone the capacity to use any technology more sophisticated than an on/off switch. If the user can’t use it, it doesn’t work. By this definition, this alarm system doesn’t work. Caveat emptor, but selling a product that is meant to protect people when the net effect is to further expose their vulnerability is crass mis-selling. How ironic!

Wednesday, 15 March 2017

Safer Healthcare



I've just finished reading Safer Healthcare. For me, the main take-home message is the different kinds of safety that pertain to different situations. Vincent and Amalberti describe three different approaches to safety:
  • ultra-safe, avoiding risk, amenable to standardised practices and checklists. This applies to the areas of healthcare where it is possible to define (and follow) standardised procedures.
  • high-reliability, managing risks, which I understand as corresponding to "resilient" or "safety II" – empowering people within the system to learn and adapt. This seems to apply to a lot of healthcare, where the variabilities can't be eliminated, but can be managed.
  • ultra-adaptive, embracing risk. This relies on the skills and resilience of individuals. It applies to innovative techniques (the very first heart transplant, for example) where it really isn't possible to plan fully ahead of time because so much is unknown.
The authors draw on the example of rock climbing. The safest forms of climbing (with a top-rope, which really does minimise the chances of hitting the ground from a fall) are in the first category; most climbing falls into the second: we manage risk by carefully following best practice while accepting that there are inherent risks; people more adventurous than me (and more skilled) push the boundaries of what is possible – both for themselves and for the community. But it is also possible to compromise safety, as graphically described by James McHaffie addressing Eve Lancashire, whose attitude to safety worries him (see about half way through the post).

Vincent and Amalberti's categorisation highlights why comparing healthcare with aviation in terms of safety is of limited value: commercial aviation is, in their terms, ultra-safe, with standardised procedures and a lot of barriers to risk; healthcare involves far too much variability for all of it to be amenable to such an approach.

Another point Vincent and Amalberti make is that incidents/harm very often don't happen within one episode of care, but evolve over time. I am reminded of a similar point made in a very different context by Brown and Duguid, who described the way that photocopier engineers learn about their work (and the variability across machines and situations): they describe it as being like the "passage of the sun across the sky" – i.e., it's not really clear when it starts or ends, or even exactly how it develops moment to moment. So many activities – and incidents – don't have a clear start and end. Possibly the main thing that distinguishes a reportable incident is that there is a point at which someone realises that something has gone wrong...

Sunday, 12 March 2017

Public health – personal health



I've just re-read the Academy of Medical Sciences report "Improving the health of the public by 2040". It makes many insightful points, particularly about the need for multidisciplinary training to deliver future professionals who can work across disciplinary silos – whether within healthcare and medical disciplines or with other disciplines such as computing and other branches of engineering. It also notes the likely importance of digital tools and "big data" in the future. It does, however, focus entirely on the population, apparently ignoring the fact that the population is made up of individuals, who each control their own health – at least to the extent that they can choose whether to comply (or adhere) with medical advice and whether or not to share data about themselves. The report misses a big opportunity by not linking the individual to the population, because the health outcomes and practices of the population emerge from the individual behaviours of each person. Sure, the behaviours of individuals are shaped by population-level factors, but they aren't determined by them. It's surely time to link the individual and the population better.


This can be compared with the Wachter Review, which focused on the value of electronic health records and other digital technologies for delivering safer and more effective care. That review also highlighted the need for professionals with skills that cross information technologies and clinical expertise, but it also considered issues such as engagement and usability. It notes that "implementing health IT is one of the most complex adaptive changes in the history of healthcare". Without addressing that complexity (which is a consequence of the number of individuals, roles, organisations and cultures involved), it's going to be difficult to achieve population-level improvements – by 2040, or at any time.

Friday, 28 October 2016

Guidance on creating, evaluating and implementing effective digital healthcare interventions

This is an unconventional blog post – essentially, a place to index a set of papers. Last year, I participated in a workshop: ‘How to create, evaluate and implement effective digital healthcare interventions: development of guidance’. 
The workshop was led by Susan Michie, and resulted in a set of articles discussing key issues facing the development and evaluation of digital behaviour change interventions. There were about 50 participants, from a variety of countries and disciplines. And we all had to work ... on delivering interdisciplinary papers as well as on discussion. The outcome has just been published.
Credits: The workshop was hosted in London by the Medical Research Council, with funding from the Medical Research Council (MRC)/National Institute for Health Research (NIHR) Methodology Research Programme, the NIH Office of Behavioral and Social Sciences Research (OBSSR) and the Robert Wood Johnson Foundation. The workshop papers are being made publicly available with the agreement of the publishers of the American Journal of Preventive Medicine.

Wednesday, 17 August 2016

Reflections on two days in a smart home

I've just had the privilege of spending two days in the SPHERE smart home in Bristol. It has been an interesting experience, though much less personally challenging than I had expected. For example, it did not provoke the intensity of reaction from me that wearing a Fitbit did. What have I learned? That a passive system that just absorbs data, and that can't be inspected or interacted with by the occupant, quickly fades into the background, but that it demands huge trust of the occupant (because it is impossible to anticipate what others can learn about one's behaviour from data that one cannot see). And that as well as being non-threatening, technology has to have a meaningful value and benefit to the user.

Reading the advance information about staying in the SPHERE house, I was reassured that they had considered safety and privacy issues well. I wasn't sure what to expect of the wearable devices or how accurate they would be. My experience of wearing a Fitbit previously had left me with low expectations of accuracy. I anticipated that wearing devices in the house might make me feel like a lab rat, and I was concerned about wearing anything outside the house. It turned out that the only wearable was on the wrist, and was only worn in the house anyway, so it was less obtrusive than commercial wearables.

I had no idea of what interaction mechanisms to expect: I expected to be able to review the data being gathered in real time, and wondered whether I would be able to draw any inferences from that data. Wrong! The data was never available for inspection, because of the experimental status of the house at the time.

When we arrived, it was immediately obvious that the house is heavily wired, but most of the technology is one-way (sucking information without giving anything back to the participant). Most of the rooms are quite sparse and magnolia. The dining room feels very high-tech, with wires and chips and stuff all over the place – more like a lab than a home. To me, this makes that room a very unwelcoming place to be, so we chose to eat dinner in the living room.

I was much more aware of the experimental aspects of the data gathering (logging our activities) than of the lifestyle (and related) monitoring. My housemate seemed to be quite distracted by the video recording for a while; I was less distracted by it than I had expected. The fact that I cannot inspect the data means that I have no option to reflect on it, so it quickly became invisible to me.
 
The data gathering that we did manually was meant to be defining the ‘ground truth’, but with the best will in the world I’m not sure how accurate the data we provided was – we both kept forgetting to carry the phones everywhere with us, and kept forgetting to start new activities or finish completed ones. Recording activities involves articulating the intention to do something (such as making a hot drink or putting shopping away) just before starting to do it, and then articulating that it has been finished when it’s over. This isn't natural! Conversely, at one point, I happened to put the phone on a bedside table and accidentally started logging "sleep" through the NFC tag!
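
For readers curious about what this kind of manual logging amounts to, here is a minimal sketch of a start/finish activity log of the sort described above (all names are hypothetical – this is a sketch, not the SPHERE implementation):

    from datetime import datetime

    # Hypothetical sketch of manual 'ground truth' activity logging;
    # not the actual SPHERE software.
    class ActivityLog:
        def __init__(self):
            self.open_activities = {}  # activity name -> start time
            self.records = []          # completed (activity, start, end) tuples

        def start(self, activity):
            """The participant announces an activity just before starting it."""
            self.open_activities[activity] = datetime.now()

        def finish(self, activity):
            """The participant announces that the activity is over."""
            started = self.open_activities.pop(activity, None)
            if started is None:
                return  # a finish without a start: exactly the gap described above
            self.records.append((activity, started, datetime.now()))

        def unfinished(self):
            """Activities started but never closed – like my accidental 'sleep'."""
            return list(self.open_activities)

Even in sketch form, the failure modes are visible: a forgotten start or finish silently corrupts the 'ground truth' against which the sensor data will be judged.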

By day 2, I was finding little things oppressive: the fact that the light in the toilet didn’t work and neither did the bedside lights; the lack of a mirror in the bedroom; the fact that everything is magnolia; and the trailing wires in several places around the house. I hadn't realised how important being "homely" was to me, and small touches like cute doorstops didn't deliver.

To my surprise, the room I found least private (even though it had no video) was the toilet: the room is so small and the repertoire of likely actions so limited that it felt as if the wearable was transmitting details that would be easily interpreted. I have no way of knowing whether this is correct (I suspect it is not).

At one point, the living room got very hot so I had to work out how to open the window; that was non-trivial and involved climbing on the sofa and the window sill to work out how it was secured. I wonder what that will look like as data, but at least we had fresh air! 

By the time we left, I was getting used to the ugliness of the technology, and even to the neutrality of the house colours. I had moved things around to make life easier – e.g., moving the telephone off my bedside table to make space for my water and phone (though having water next to the little PCB felt like an accident waiting to happen).

My housemate worked with the SPHERE team to visualise some data from three previous residents, which showed that all three of them had eaten their dinners in the living room rather than the dining room. We both seemed to find this slightly amusing, but also affirming: other people made the same decision as we did.

The main issue for me was that the ‘smart’ technology had no value to me as an inhabitant of the house in its current experimental state. And I would really expect to go beyond inspectability of data to interactivity before the value becomes apparent. Even then, I’m not sure whether the value is short- or long-term: is it about learning about health and behaviours in the home, or is it about real-time monitoring and alerting for health management? The long-term value will come with the latter; for the former, people might just want a rent-a-kit that allows them to learn about their behaviours and adapt them over maybe 2–3 months. But this is all in the future. The current home is a prototype to test what is technically possible. The team have paid a lot of attention to privacy and trust, but not much yet to value. That's going to be the next exciting challenge...

Tuesday, 26 January 2016

The lifecourse and digital health

I've just been away for the weekend with a group of people of varying ages. Over breakfast, I was chatting with Diane (names have been changed), who surmised that she was the oldest person there. I looked quizzical: surely she's in her 70s and Edna is in her late 80s? But no: apparently, Diane is 88, and thinks that Edna is only 86. Appearances can be deceptive. Diane has a few health niggles (eyesight not as good as it once was, hip occasionally twinges) but she remains fit and active, physically and mentally. I hope I will age as well.

Meanwhile, last week I was at an Alan Turing Institute workshop on "Opportunities and Challenges for Data Intensive Healthcare". The starting point was that data sciences have always played a key role in healthcare provision and deployment of preventative interventions, and that we need novel mathematical and computational techniques to exploit the vast quantities of health and lifestyle data that are now being generated. Better computation is needed to deliver better health management and healthcare at lower cost. And of course people also need to be much more engaged in their own care for care provision to be sustainable.

There was widespread agreement at the meeting that healthcare delivery is in crisis, with rising costs and rising demands, and that there is a need for radical restructuring and rethinking. For me, one of the more telling points made (by a clinician) is that significant resources are expended to little good effect in the interests of keeping people alive, when perhaps they should be left to die peacefully. The phrase used was "torturing people to death". I don't imagine many of us want to die in intensive care or in an operating theatre. Health professionals could use better data analytics to make more informed decisions about when "caring" means intervening and when it means stepping back and letting nature take its course.

In principle, better data, better data analysis, and better personalised health information should help us all to better manage our own health and wellbeing – not taking over our lives, but enabling us to live our lives to the full. My father-in-law's favourite phrase was "I'd like a bucket full of health please". But there's no suggestion that any of us will (or wants to) live forever. At the meeting, someone suggested that we should be aiming for the "Duracell bunny" approach to life: live well, live long, die quickly. Of course, that won't be possible for everyone (and different people have different conceptions of what it means to "live well").

This presents a real challenge for digital health and for society: to re-think how each and every one of us lives the best life we can, supported by appropriate technology. There's a widespread view that "data saves lives"; let's also try to ensure that the saved lives are worth living!

Monday, 31 August 2015

The Digital Doctor


I’ve just finished reading The Digital Doctor by Robert Wachter. It was published this year, and gives great insight into US developments in electronic health records, particularly over the past few years: Meaningful Use and the rise of Epic. The book manages to steer a great course between being personal (about Wachter’s career and the experiences of people around him) and drawing out general themes, albeit from a US perspective. I’d love to see an equivalent book about the UK, but suspect there would be no-one qualified to write it.

The book is simultaneously fantastic and slightly frustrating. I'll deal with the frustrating first: although Wachter claims that a lot of the book is about usability (and indeed there are engaging and powerful examples of poor usability that have resulted in untoward incidents), he seems unaware that there’s an entire discipline devoted to understanding human factors and usability, and that people with that expertise could contribute to the debate. My frustration is not with Wachter, but with the fact that human factors is apparently still so invisible, and that there still seems to be an assumption that the only qualification needed to be an expert in human factors is to be a human.

The core example (the overdose of a teenage patient with 38.5 times the intended dose of a common antibiotic) is told compellingly from the perspectives of several of the protagonists:

    poor interface design leads to the doctor specifying the dose in mg, but the system defaulting to mg/kg and therefore multiplying the intended dose by the weight of the patient (sketched in code below);

    the system issues so many indistinguishable alerts (most very minor) that the staff become habituated to cancelling them without much thought – and one of the reasons for so many alerts is the EHR supplier covering themselves against liability for error;

    the pharmacist who checked the order was overloaded and multitasking, using an overly complicated interface, and trusted the doctor;

    the robot that issued the medication had no ‘common sense’ and did not query the order;

    the nurse who administered the medication was new and didn’t have anyone more senior on hand to quickly check the prescription with, so she assumed that all the earlier checks would have caught any error and that the order must be correct;

    the patient was expecting a lot of medication, so didn’t query how much “a lot” ought to be.
This is about design and culture. The book says surprisingly little about designing for safety from the outset (it’s hardly as if “alert fatigue” is a new phenomenon, or as if the user interface design and confusability of units is surprising or new): while those involved in deploying new technology in healthcare should be able to learn from their own mistakes, there’s surely also room for learning from the mistakes (and the expertise!) of others.
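
To make the first of those failure modes concrete, here is a minimal sketch of how a mg vs mg/kg default multiplies an intended dose by the patient's weight. The entered dose value here is invented; the 38.5 kg weight is chosen to match the 38.5-fold overdose described above. This is an illustration, not the actual EHR's logic:

    # Hypothetical illustration of the mg vs mg/kg confusion described above;
    # not the real system's code.
    def dose_to_dispense_mg(entered_value, unit, patient_weight_kg):
        """Return the dose the system will actually order, in mg."""
        if unit == "mg":
            return entered_value
        if unit == "mg/kg":
            # Weight-based dosing: the entered number is silently
            # multiplied by the patient's weight.
            return entered_value * patient_weight_kg
        raise ValueError("unknown unit: " + unit)

    intended = dose_to_dispense_mg(160, "mg", 38.5)      # what the doctor meant
    dispensed = dose_to_dispense_mg(160, "mg/kg", 38.5)  # what the default produced
    print(dispensed / intended)  # 38.5 – the overdose factor in the case above

The arithmetic is trivial; the design failure is that nothing on the entry screen made the active unit salient to the doctor before the order was confirmed.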

The book covers a lot of other territory: from the potential for big data analytics to transform healthcare to the changing role of the patient (and the evolving clinician–patient relationship) and the cultural context within which all the changes are taking place. I hope that Wachter’s concluding optimism is well founded. It’s going to be a long, hard road from here to there that will require a significant cultural shift in healthcare, and across society. This book really brought home to me some of the limitations of “user centred design” in a world that is trying to achieve such transformational change in such a short period of time, with everyone having to just muddle through. This book should be read by everyone involved in the procurement and deployment of new electronic health record systems, and by their patients too... and of course by healthcare policy makers: we can all learn from the successes and struggles of the US health system.

Sunday, 24 May 2015

Digital Health: tensions between hype and reality

There are many articles predicting an amazing future for digital technologies for healthcare: wearables and implantables, wirelessly enabled to gather vital signs information and collate it for future sensemaking by the individual, clinicians and population health researchers. Two examples that have recently come to my attention are a video by the American Psychiatric Association and a report by Deloitte. The possibilities are truly transformational.

Meanwhile, I recently visited a friend who has Type II diabetes. On his floor, half hidden by a table, I spotted what I thought was a pen lid. It turned out to be the top of his new lancing kit. Although he had been doing blood glucose checks daily for well over a decade, he hadn't done one for over two weeks. Not just because he'd lost an essential part of the equipment, but because he'd been prescribed the new tool and hadn't been able to work out how to use it. So losing part of it wasn't a big deal: it was useless to him anyway. He told me that when he'd reported his difficulties to his clinician, he'd... been prescribed a second issue of exactly the same equipment. So now he has three sets of equipment: the original (Accu-Chek) lancing device and blood glucose meter, which he has used successfully for many years, but which he can't use now because he doesn't have spare consumables; and two lancing devices and meters (from a different manufacturer), with plenty of spare consumables, which he can't use because he finds the lancing device too difficult to use. And in trying to work out with him what the problem was, we managed to break one of them. Good thing he's got a spare!

And it's not just isolated individuals who struggle: a recent article in Forbes describes similar issues at scale, with poor usability of electronic health records and patient portals making work less efficient and effective rather than more.

So on the one hand we have the excitement of future possibilities that are gradually becoming reality for many people, and on the other hand we have the lived experiences of individuals. And for some people, change is not necessarily good. The real challenge is to design a future that can benefit all, not just the most technology-savvy in society.

Saturday, 7 February 2015

Designing: the details and the big picture

I was at a meeting this week discussing developments to the NHS Choices site. This site is an amazing resource, and the developers want to make it better, more user-centred. But it is huge, and has huge ambitions: to address a wide variety of health-related needs, and to be accessible by all.

But of course, we are not all the same: we have different levels of knowledge, different values, needs, and ways of engaging with our own health. Some love to measure and track performance (food intake, weight, blood pressure, exercise, sleep, mood: with wearable devices, the possibilities are growing all the time). Others prefer to just get on with life and react if necessary.

We don't all choose to consume news in the same way (we read different papers, track news through the internet, TV or radio, or maybe not at all); similarly, we don't all want health information in the same form or "voice". And it is almost impossible to consider all the nuanced details of the design of a site that is intended to address the health needs of "everyone" while also maintaining a consistent "big picture". Indeed, if one imagines considering every detail, the task would become overwhelmingly large. So some "good enough" decisions have to be made.

I am very struck by the contrast between this, as an example of interaction design where there is little resource available to look at details, and the course that my daughter is doing at the moment, which has included a focus on typographical design. In that course, they review the fine details of the composition and layout of every character. Typography is a more mature discipline than interaction design, and arguably a more tractable one (it's about the graphics, the reading, and the emotional response). I hope that one day interaction design will achieve this maturity, and that it will be possible to have the same kind of mature discourse about both the big picture and the details of users, usability and fitness for purpose.


Friday, 9 January 2015

Compliance, adherence, and quality of life

My father-in-law used to refuse presents on the principle that all he wanted was a "bucket full of good health". And that is something that no one is really in a position to give. Fortunately for him (and us!) he remained pretty healthy and active until his last few weeks. And this is true for many of us: we have mercifully little experience of chronic ill health. But not everyone is so lucky.

My team has been privileged to work with people suffering from chronic kidney disease, and with their families, to better understand their experiences and their needs when managing their own care. Some people with serious kidney disease have a kidney transplant. Others have dialysis (which involves having the blood 'cleansed' every couple of days). There is widespread agreement amongst clinicians that it's best for people if they can do this at home. And the people we worked with (who are all successful users of dialysis technology at home) clearly agreed. They were less concerned, certainly in the way they talked with us, about their life expectancy than about the quality of their lives: their ability to go out (for meals, on holiday, etc.), to work, to be with their families, to feel well. Sometimes, that demanded compromise: some people reported adopting short-cuts, mainly to reduce the time that dialysis takes. And one had her dialysis machine set up on her verandah, so that she could dialyse in a pleasant place. Quality of life matters too.

The health literature often talks about "compliance" or "adherence", particularly in relation to people taking medication. There's the same concern with dialysis: that people should be dialysing according to an agreed schedule. And mostly, that seemed to be what people were doing. But sometimes they didn't, because other values dominated. And sometimes they didn't because the technology didn't work as intended and they had to find ways to get things going again. Many of them had turned troubleshooting into an art! As more and more health management happens at home, making people immediately and directly responsible for their own welfare, it seems likely that terms like "compliance" and "adherence" need to be re-thought, to allow us all to talk about living as enjoyably and well as we can – with the conditions we have and the available means for managing those conditions. And (of course) the technology should be as easy to use and as safe as possible. Our study is hopefully of interest not just to those directly affected by kidney disease, or caring for someone who is, or designing technology for managing it, but also to those thinking more broadly about policy on home care and how responsibility is shared between clinicians, patients and family.

Wednesday, 7 January 2015

Strategies for doing fieldwork for health technology design


The cartoons in this blog post are from Fieldwork for Healthcare: Guidance for Investigating Human Factors in Computing Systems, © 2015 Morgan and Claypool Publishers, www.morganclaypool.com. Used with permission.
One of the themes within CHI+MED has been better understanding how interactive medical devices are used in practice, recognising that there are often important differences between work as imagined and work as done. This has meant working with many people directly involved in healthcare (clinicians, patients, relatives) to understand their work when interacting with medical devices: observing their interactions and interviewing them about their experiences. But doing fieldwork in hospitals and in people’s homes is challenging:
  • You need to get formal ethical clearance to conduct any study involving clinicians or patients. As I’ve noted previously, this can be time-consuming and frustrating. It also means that it can be difficult to change the study design once you discover that things aren’t quite the way you’d imagined, however much preparatory work you’d tried to do. 
  • Hospitals are populated by people from all walks of life, old and young, from many cultures and often in very vulnerable situations. They, their privacy and their confidentiality need to be respected at all times.
  • Staff are working under high pressure. Their work is part-planned, part-reactive, and the environment is complex: organisationally, physically, and professionally. The work is safety-critical, and there is a widespread culture of accountability and blame that can make people wary of being observed by outsiders.
  • Healthcare is a caring profession and, for the vast majority of staff, technology use is a means to an end; the design of that technology is not of interest (beyond being a source of frustration in their work).
  • You’re always an ‘outsider’: not staff, not patient, not visitor, and that’s a role that it can be difficult to make sense of (both for yourself and for the people you’re working with).
  • Given the safety-critical nature of most technologies in healthcare, you can’t just prototype and test ‘in the wild’, so it can be difficult to work out how to improve practices through design.

When CHI+MED started, we couldn’t find many useful resources to guide us in designing and conducting studies, so we found ourselves ‘learning on the job’. And through discussions with others we realised that we were not alone: that other researchers had very similar experiences to ours, and that we could learn a lot from each other.

So we pooled expertise to develop resources to give future researchers a ‘leg up’ for planning and conducting studies. And we hope that the results are useful resources for future researchers:

  • We’ve recently published a journal paper that focuses on themes of gaining access; developing good relations with clinicians and patients; being outsiders in healthcare settings; and managing the cultural divide between technology human factors and clinical practice.
  • We’ve published two books on doing fieldwork in healthcare. The first volume reported the experiences of researchers through 12 case studies, covering experiences in hospitals and in people’s homes, in both developed and developing countries. The second volume presents guidance and advice on doing fieldwork in healthcare. The chapters cover ethical issues, preparing for the context and networking, developing a data collection plan, implementing a technology or practice, and thinking about impact.
  • Most of our work is neither pure ethnography nor pure Grounded Theory, but somewhere between the two in terms of both data gathering and analysis techniques: semi-structured, interpretivist, pragmatic. There isn’t an agreed name for this, but we’re calling them semi-structured qualitative studies, and have written about them in these terms.

If you know of other useful resources, do please let us know!

Friday, 21 November 2014

How not to design the user experience in electronic health records

Two weeks ago, I summarised my own experience of using a research reporting system. I know (from subsequent communications) that many other researchers shared my pain. And Muki Haklay pointed me at another blog on usability of enterprise software, which discusses how widespread this kind of experience is with many different kinds of software system.

Today, I've had another experience that I think it's worth reporting briefly. I had a health screening appointment with a nurse (I'll call her Naomi, but that's not her real name). I had to wait 50 minutes beyond the appointment time before I was seen. Naomi was charming and apologetic: she was struggling with the new health record system, and every consultation was taking longer than scheduled. This was apparently only the second day that she had been using the health screening functions of the system. And she clearly thought that it was her own fault that she couldn't use it efficiently.

She was shifting between different screen displays more times than I could count. She had a hand-written checklist of all the items that needed to be covered in the screening, and was using a separate note (see right) to keep track of the measurements that she was taking. She kept apologising that this was only because the system was unfamiliar, and she was sure she'd be able to work without the checklist before long. But actually, checklists are widely considered helpful in healthcare. She was working systematically, but this was in spite of the user interactions with the health record system, which provided no support whatsoever for her tasks, and seemed positively obstructive at times. As far as I know, all the information Naomi entered into my health record was accurate, but I left her struggling with the final item: even though, as far as either of us could see, she had completed all the fields in the last form correctly, the system wasn't letting her save it, blocking it with a claim that a field (unspecified) had not been completed. Naomi was about to seek help from a colleague as I left. I don't know what the record will eventually contain about my smoking habits!
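
That unspecified-field error deserves a moment's reflection, because naming the offending field costs the developer almost nothing. Here is a minimal sketch of the two behaviours (the field names are invented, not taken from the real system):

    # Hypothetical sketch contrasting the error Naomi saw with a helpful one.
    REQUIRED_FIELDS = ["height_cm", "weight_kg", "blood_pressure", "smoking_status"]

    def save_unhelpful(record):
        if any(not record.get(field) for field in REQUIRED_FIELDS):
            raise ValueError("A field has not been completed.")  # but which one?

    def save_helpful(record):
        missing = [field for field in REQUIRED_FIELDS if not record.get(field)]
        if missing:
            raise ValueError("Please complete: " + ", ".join(missing))
        # ... proceed to save the record ...

The validation logic is identical in both versions; only the second tells the user what to do next.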

This is just one small snapshot of users' experience with another system that is not fit for purpose. Things like this are happening in healthcare facilities all over the world every day of the week. The clinical staff are expected to improvise and act as the 'glue' between systems that have clearly been implemented with minimal awareness of how they will actually be used. This detracts from both the clinicians' and the patients' experiences, and if all the wasted time were costed it would probably come to billions of £/$/€/ currency-of-your-choice. Electronic health records clearly have the potential to offer many capabilities that paper records could not, but they could be so, so much better than they are if only they were designed with their users and purposes in mind.

Saturday, 5 April 2014

Never mind the research, feel the governance

In the past 5 days, I have received and responded to:
  • 16 emails from people in the university, the REC (research ethics committee) and the hospital about one NHS ethics application that required a two-word change to one information sheet after it had already been approved by both the university and the REC – but the hospital spotted a minor problem, and now the application has to go around the whole cycle again, which is likely to take several weeks at least.
  • 6 emails about who exactly should sign one of the forms in a second ethics application (someone in the university or the hospital).
  • 12 emails about the set of documents needed for a third application (I lost count of what's needed past 20 items).
I dread to think what the invisible costs of all these communications and actions are, when scaled up to all the people involved in the process (and my part is a small one because I delegate most of the work to others), and to all the ethics applications that are going on in parallel.

I thought I was getting to grips with the ethics system for the NHS; I had even thought that it was getting simpler, clearer and more rational over time. But recent experiences show otherwise. This is partly because we're working with a wider range of hospitals than previously, and every one seems to have its own local procedures and requirements. Some people are wonderful and really helpful; others seem to consider it their job to find every possible weakness and block progress. I have wondered at times whether this is because we are not NHS employees (or indeed even trained clinicians). But it seems not: clinical colleagues report similar problems; in fact, they've put a cost on the delays that they have experienced through the ethical clearance process, and those costs run into hundreds of thousands of pounds. We don't do research to waste money like this, but to improve the quality and safety of patient care.

Today, there's an article in the Guardian about the under-resourcing of the health service and the impact this is having on patient care. Maybe I'm naive, but if the inefficiencies that we find in the process of gaining permission to conduct a research study in the NHS are replicated in all other aspects of health service delivery, it's no wonder they feel under-resourced.

Tuesday, 1 April 2014

Looking for the keys under the lamp post? Are we addressing the right problems?

Recently, I received an impassioned email from a colleague: "you want to improve the usability of the bloody bloody infusion pump I am connected to? give it castors and a centre of gravity so I can take it to the toilet and to get a cup of coffee with ease". Along with photos to illustrate the point.

He's completely right: these are (or should be) important design considerations. People still want to live their lives and have independence as far as possible, and that's surely in the interests of staff as well as patients and their visitors.

In this particular case, better design solutions have been proposed and developed. But I've never seen one of these in use. Instead, I've seen plenty of improvised solutions, such as a bed-bound patient being wheeled from one ward to another with a nurse walking alongside holding up the bag of fluid while the pump is balanced on the bed with the patient.

Why don't hospitals invest in better solutions? I don't know. Presumably because the problem is invisible to the people who make purchasing decisions, because staff and patients are accustomed to making do with the available equipment, and because better equipment costs more but has minimal direct effect on patient outcomes.

An implication of the original message is that in CHI+MED we're addressing the wrong problem: that in doing research on interaction design we're missing the in-your-face problem that the IV pole is so poorly designed. That we're like the drunk looking for the keys under the lamp post because that's where the light is, when in fact the keys got dropped somewhere else. Others who claim that the main problem in patient safety is infection control are making the same point: we're focusing our attention in the wrong place.

I wish there were only one problem to solve – one key to be found, under the lamp post or elsewhere. But that's not the case. In fact, in healthcare there are so many lost keys that they can be sought and found all over the place. Excuse me while I go and look for some more...



Thursday, 27 March 2014

Mind the gap: the gulfs between idealised and real practice

I've given several talks and written short articles about the gap between idealised and real practice in the use of medical devices. But to date I've blurred the distinctions between concerns from a development perspective and those from a procurement and use perspective.

Developers have to make assumptions about how their devices will be used, and to design and test (and build safety cases, etc.) on that basis. Their obligation (and challenge) is to make the assumptions as accurate as possible for their target market segment. And to make the assumptions as explicit as possible, particularly for subsequent purchasing and use. This is easier said than done: I write as someone who on Tuesday signed an agreement for a pile of work on our car, most of which was required but part of which was not; how the unnecessary work got onto the job sheet I do not know, but because I'd signed for it, I had to pay for it. Ouch! If I can accidentally sign for a little bit of unnecessary work on the car, how much easier is it for a purchasing officer to sign off unnecessary features, or slightly inappropriate features, on a medical device? [Rhetorical question.]

Developers have to work for generic market segments, whether those are defined by the technological infrastructure within which the device sits, the contexts and purposes for which the device will be used, the level of training of its users, or all of the above. One device probably can't address all needs, however desirable 'consistency' might be.

In contrast, a device in use has to fit a particular infrastructure, context, purpose, user capability... So there are many knowns where previously there were unknowns. And maybe the device fits well, and maybe it doesn't. And if it doesn't, then something needs to change. Maybe it was the wrong device (and needs to be replaced or modified); maybe it's the infrastructure or context that needs to be changed; maybe the users need to be trained differently / better.

When there are gaps (i.e., when technology doesn't fit properly), people find workarounds. We are so ingenious! Some of the workarounds are mostly positive (such as appropriating a tool to do something it wasn't designed for, but for which it serves perfectly well); some introduce real vulnerabilities into the system (by violating safety features to achieve a short-term goal). When gaps aren't even recognised, we can't even think about them or how to design to bridge them. We need to be alert to the gaps between design and use.

Sunday, 16 March 2014

Collaborative sensemaking under uncertainty: clinicians and patients

I've been discussing a couple of 'conceptual change' projects with clinicians, both of them in topic areas (pain management and contraception) where the clinical details aren't necessarily well understood, even by most clinicians. I have been struck by a few points that seem to me to be important when considering the design of new technologies to support people in managing their health:
  1. Different people have different basic conceptual structures onto which they 'hang' their understanding. The most obvious differences are between health professionals (who have received formal training in the subject) and lay people (who have not), but there are also many individual differences. In the education literature, particularly building on the work of Vygotsky, we find ideas of the 'Zone of Proximal Development' and of 'scaffolding'. The key point is that people build on their existing understanding, and ideas that are too far from that understanding, or are expressed in unfamiliar terms, cannot be assimilated. In the sensemaking literature, Klein discusses this in terms of 'frames', while Pirolli, Card and Russell discuss the process of making sense of new information in terms of how people look for and integrate new information with existing understanding guided by the knowledge gaps of which they are aware. In all of these literatures, and others, it's clear that any individual starts from their current understanding and builds on it, and that significant conceptual change (throwing out existing ideas and effectively starting again from scratch) is difficult. This makes it particularly challenging to design new technologies that support sensemaking because it's necessary to understand where someone is starting from in order to design systems that support changing understanding.
  2. One of the important roles of clinicians is to help people to make sense of their own health. In the usual consultation, this is a negotiative process, in which common ground is achieved – e.g., by the clinician having a repertoire of ways of assessing the patient's current understanding and building on it. The clinician's skills in this context are not well understood, as far as I'm aware.
  3. For many patients, the most important understanding is 'what to do about it': it's not to get the depth of understanding that the clinician has, but to know how to manage their condition and to make appropriately informed decisions. Designing systems to support people in obtaining different depths and types of understanding is an exciting challenge.
  4. Health conditions can be understood at many different levels of abstraction (from basic chemistry and biology through to high-level causal relations), and we seem to employ metaphors and analogies to understand complex processes. Inevitably, these have great value, but also break down when pushed too far. There's probably great potential in exploring the use of different metaphors and explanations to support people in managing their health.
  5. As people are being expected to take more responsibility for their own health, there's a greater onus on clinicians to support patients' understanding. Clinicians may have particular understanding that they want to get across to patients, but it needs to be communicated in different ways for different people. And we need to find ways of managing the uncertainty that still surrounds much understanding of health (e.g. risks and side-effects).
All these points make it essential to consider Human Factors in the design of technologies to support conceptual change, behavioural change and decision making in healthcare, so that we can close the gap between clinicians' and patients' understanding in ways that work well for both.

Thursday, 31 October 2013

Different ways of interacting with an information resource

I'm at a workshop on how to evaluate information retrieval systems, and we are discussing the scope of concern. What is an IR system, and is the concept still useful in the 21st century, when people engage with information resources in so many different ways? The model of information seeking in sessions for a clear purpose still holds for some interactions, but it's certainly not the dominant practice any more.

I was struck when I first used the NHS Choices site that it encourages exploration above seeking: it invites visitors to consume health information that they hadn't realised they might be interested in. This is possible with health in a way that it might not be in some other areas, because most people have some inherent interest in better understanding their own health and wellbeing. At least some of the time! Such sites encourage unplanned consumption, hopefully leading to new understanding, without having a particular curriculum to impart.

On the way here, I read a paper by Natalya Godbold in which she describes the experiences of dialysis patients. One of the points she makes is that people on dialysis exploit a wide range of information resources in managing their condition – importantly, including how they feel at the time. This takes embodied interaction into a new space (or rather, into a space in which it has been occurring for a long time without being noticed as such): the interaction with the technology affects and is informed by the experienced effects that flow (literally as well as metaphorically) through the body. And information need, acquisition, interpretation and use are seamlessly integrated as the individual monitors, makes sense of and manages their own condition. The body, as well as the world around us, is part of the ecology of information resources we work with, often without noticing.

While many such resources can't be "designed", it's surely important to recognise their presence and value when designing explicit information resources and IR systems.

Thursday, 10 October 2013

Safety: the top priority?

For the past two days, I've been at the AAMI Summit on Healthcare Technology in Nonclinical Settings. The talks have all been short and to the point, and generally excellent. They included talks from people with first-hand experience of living with or caring for someone with a long-term condition, as well as developers, researchers... but no regulators, because of the US government shutdown. For many of the participants, the most memorable talks have been the first-hand accounts of living with medical devices and of the things people do and encounter. I'll change names, but the following are some examples.

Megan's partner is on oxygen therapy. The cylinders are kept in a cupboard near an air conditioning unit. One day, a technician visited to fix something on the aircon unit. As he was working, Megan heard a sound like a hot air balloon. She stopped him just in time: he had just ignited a blow-torch, right next to the oxygen cylinders. Naked flames and oxygen are an explosive combination. In this case, the issue was one of ignorance: the cylinders weren't sufficiently clearly labelled for the technician to realise what they were. However, there are also accounts of people on oxygen therapy smoking; in some cases, people have continued to smoke even after suffering significant burns. That's not ignorance; it's a choice they make. Apparently, the power of the cigarette is greater than safety considerations.

Fred's son has type 1 diabetes. He was being bullied at school, to a degree that he found hard to bear. He took to poking a pencil into his insulin pump to give himself an excess dose, causing hypoglycaemia, so that his parents would be called to take him home (or, in more serious cases, to hospital). Escaping the bullying was more important than suffering the adverse effects of hypoglycaemia.

In our own studies, we have found people making tradeoffs such as these. The person with diabetes who avoids taking his glucose meter or insulin on a first date because he doesn't want the new girlfriend to know about the diabetes until they have got to know each other (as people) a bit better first. The person on home haemodialysis who chooses to dialyse on her veranda even though the dialysate doesn't work well when it is cold, so she needs to use a patio heater as well: the veranda is a much more pleasant place to be than indoors, so again she's making a tradeoff.

Patient safety is a gold standard. We have institutes and agencies for patient safety. It's incumbent on the healthcare system (clinicians, manufacturers, regulators, etc.) to minimise the risks to patients of their treatment, while recognising that risks can't be eliminated. But we also need to remember that patients are also people. And as people we don't always prioritise our own safety. We drive fast cars; we enjoy dangerous sports; we dodge traffic when crossing the road; etc. We're always making tradeoffs between safety and other values. That doesn't change just because someone's "a patient".