Showing posts with label situated use.

Wednesday, 28 November 2018

Palliative care technology (professional interest meets intensely personal experience)

About 10 years ago, when I first started working on infusion devices, I met a medical director who did a lot of work in hospices; he noted that the motors in the syringe drivers in use at that time hummed gently while delivering medication, and that many families hated the constant reminder that this meant that their loved one was on end-of-life care.

Recently, I have experienced this at first hand, except that the syringe driver being used was mercifully quiet, and did nothing to remind us of its presence. It only really featured when Dad (now very peacefully sleeping) had to be turned to a different position, and the care professionals had to take care not to occlude or dislodge the line. And yet this simple device had huge emotional import: it still, silently, announced that the end of a life was near. It was exactly the ending that we had agreed we would want if possible: peaceful, free of invasive or disruptive interventions, with family around. And yet I still found myself wanting to remove the driver because it signified a conscious decision, or determination, that Dad was indeed going to die. Maybe if I removed the driver then Dad would spring back into life. So I find myself with very mixed emotions about the driver: gratitude that it did indeed contribute to a peaceful, pain-free ending, combined with distress that it announced and determined the inevitability of that ending.

As a technology professional, I of course also found the device interesting: the nurse who set it up did so with great care, and clearly found it easy to use; it is a task she performs routinely. But the three aspects that we highlight in our paper on "Bags, Batteries and Boxes" all came up in the conversation around the driver. The disposable bag provided was identical to the one featured on the left in Figure 1 of our paper (though all it did was notionally hide the driver, which was, in any case, hidden under the sheet). The nurse replaced the battery at the start and after 24 hours to minimise the risk of it running out of charge. The box was locked to prevent tampering (correct) but, bizarrely, when it came to removing the driver after Dad's death, I was the only person in the room who knew where the key was located, which rather undermined its role as protection against tampering. Since no nurse visited after Dad's death and I didn't want him to be moved while still attached to said driver, I asked the doctor to remove the butterfly needle. Clearly, the doctor had never done such a thing before, reinforcing findings from our study of incident reports involving syringe drivers used in private homes: doctors are sometimes put in the position of having to use technology with which they have no familiarity. Thankfully, the doctor did kindly remove the line, gently, as if removing it from a living patient, and we could send Dad off suitably clothed and unencumbered by redundant technology. I can only assume that the driver was returned to the community nurse team later.

I'll close by thanking the amazing staff at Tegfield House, who cared so diligently for both Dad and us and the equally amazing NHS nurses and doctors who cared for Dad over many years, and particularly in his final hours.

Friday, 7 April 2017

If the user can’t use it, it doesn’t work: focusing on buying and selling


"If the user can’t use it, it doesn’t work": This phrase, from Susan Dray, was originally addressed at system developers. It presupposes good understanding of who the intended users are and what their capabilities are. But the same applies in sales and procurement.

In hospital (and similar) contexts, this means that procurement processes need to take account of who the intended users of any new technology are. E.g., who are the intended users of new, wireless integrated glucometers or of new infusion pumps that need to have drug libraries installed, maintained... and also be used during routine clinical care? What training will they need? How will the new devices fit into (or disrupt) their workflow? Etc. If any of the intended users can’t use it then the technology doesn’t work.

I have just encountered an analogous situation with some friends. These friends are managing multiple clinical conditions (including Alzheimer’s, depression, the after-effects of a mini-stroke, and type II diabetes) but are nevertheless living life to the full and coping admirably. But recently they were sold a sophisticated “Agility 3” alarm system, comprising a box on the wall with multiple buttons and alerts, a wearable “personal attack alarm”, and two handheld controllers (as well as PIR sensors, a smoke alarm and more). They were persuaded that this would address all their personal safety and home security needs. I don’t know whether the salesperson referred directly or obliquely to any potential physical vulnerability. But actually their main vulnerability was that they no longer have the mental capacity to assess the claims of the salesperson, let alone the capacity to use any technology that is more sophisticated than an on/off switch. If the user can’t use it, it doesn’t work. By this definition, this alarm system doesn’t work. Caveat emptor, but selling a product that is meant to protect people when the net effect is to further expose their vulnerability is crass mis-selling. How ironic!

Tuesday, 15 November 2016

Making time for mindfulness

You can't just design a new technology and assume people will use it. The app stores are littered with apps that are used once, or not at all. It's important to understand how people fit technologies into their lives (and how the design of the technology affects how it's used). We choose to use apps (or to be open to responding to them) in ways that depend on time and place. For example, on the train in the morning, lots of commuters seem to be accessing news via apps: it's a good opportunity to catch up with what's happening in the world, and my journey's an appropriate length of time to do that in.

We've recently published a paper on how people make time for mindfulness practices.
Participants were mostly young, urban professionals (so possibly not representative of a more general population!), and their big challenge was how to fit meditation practices into their busy lives. Mindfulness is difficult to achieve on a commute, for example, so people need to explicitly make time for it, in a place that feels right. There was a tension between making it part of a routine (and something that "has to be done") and making it feel like a choice (spontaneous?). But there were lots of other factors that shape when, how and whether people used the mindfulness app, such as their sense of self-efficacy (how much they feel in control of their lives), their mood (mindfulness when you're upset or angry just isn't going to happen – not in ten minutes, anyway), and the attitudes of friends to mindfulness (peer pressure is very powerful).

Some of these are factors that can't be designed for – beyond recognising that a mindfulness app isn't going to work for all people, or in all situations. Others can, perhaps, be designed for: such as managing people's expectations of what differences mindfulness might make in their lives, and giving guidance on when and how to fit in app use. What are some of the take-homes?
  • that incidental details (like the visual appearance or the sound of someone's voice) matter;
  • that people are on a 'journey' of learning how to practice mindfulness (don't force an expert to start at the beginning just because they haven't used this particular app before, for example);
  • that people need to learn how to fit app use and mindfulness into their lives, and expectations need to be managed; and
  • that engaging with the app isn't the same as engaging with mindfulness... but the one can be a great support for the other in the right circumstances.
 
Wednesday, 17 August 2016

Reflections on two days in a smart home

I've just had the privilege of spending two days in the SPHERE smart home in Bristol. It has been an interesting experience, though much less personally challenging than I had expected. For example, it did not provoke the intensity of reaction from me that wearing a fitbit did. What have I learned? That a passive system that just absorbs data that can't be inspected or interacted with by the occupier quickly fades into the background, but that it demands huge trust of the occupant (because it is impossible to anticipate what others can learn about one's behaviour from data that one cannot see). And that as well as being non-threatening, technology has to have a meaningful value and benefit to the user.

Reading the advance information about staying in the SPHERE house, I was reassured that they have considered safety and privacy issues well. I wasn't sure what to expect of the wearable devices or how accurate they would be. My experience of wearing a fitbit previously had left me with low expectations of accuracy. I anticipated that wearing devices in the house might make me feel like a lab rat, and I was concerned about wearing anything outside the house. It turned out that the only wearable was on the wrist, and was only worn in the house anyway, so less obtrusive than commercial wearables.

I had no idea of what interaction mechanisms to expect: I expected to be able to review the data that is being gathered in real time and wondered whether I would be able to draw any inferences from that data. Wrong! The data was never available for inspection, because of the experimental status of the house at the time.

When we arrived, it was immediately obvious that the house is heavily wired, but most of the technology is one-way (sucking information without giving anything back to the participant). Most of the rooms are quite sparse and magnolia. The dining room feels very high-tech, with wires and chips and stuff all over the place – more like a lab than a home. To me, this makes that room a very unwelcoming place to be, so we chose to eat dinner in the living room.

I was much more aware of the experimental aspects of the data gathering (logging our activities) than of the lifestyle (and related) monitoring. My housemate seemed to be quite distracted by the video recording for a while; I was less distracted by it than I had expected. The fact that I couldn't inspect the data meant that I had no opportunity to reflect on it, so it quickly became invisible to me.
 
The data gathering that we did manually was meant to be defining the ‘ground truth’, but with the best will in the world I’m not sure how accurate the data we provided was – we both kept forgetting to carry the phones everywhere with us, and kept forgetting to start new activities or finish completed ones. Recording activities involves articulating the intention to do something (such as making a hot drink or putting shopping away) just before starting to do it, and then articulating that it has been finished when it’s over. This isn't natural! Conversely, at one point, I happened to put the phone on a bedside table and accidentally started logging "sleep" through the NFC tag!

By day 2, I was finding little things oppressive: the fact that the light in the toilet didn’t work and neither did the bedside lights; the lack of a mirror in the bedroom; the fact that everything is magnolia; and the trailing wires in several places around the house. I hadn't realised how important being "homely" was to me, and small touches like cute doorstops didn't deliver.

To my surprise, the room I found least private (even though it had no video) was the toilet: the room is so small and the repertoire of likely actions so limited that it felt as if the wearable was transmitting details that would be easily interpreted. I have no way of knowing whether this is correct (I suspect it is not).

At one point, the living room got very hot so I had to work out how to open the window; that was non-trivial and involved climbing on the sofa and the window sill to work out how it was secured. I wonder what that will look like as data, but at least we had fresh air! 

By the time we left, I was getting used to the ugliness of the technology, and even to the neutrality of the house colours. I had moved things around to make life easier – e.g., moving the telephone off my bedside table to make space for my water and phone (though having water next to the little PCB felt like an accident waiting to happen).

My housemate worked with the SPHERE team to visualize some data from three previous residents that showed that all three of them had eaten their dinners in the living room rather than the dining room. We both seemed to find this slightly amusing, but also affirming: other people are making the same decision as we did.

The main issue to me was that the ‘smart’ technology had no value to me as an inhabitant in the house in its current experimental state. And I would really expect to go beyond inspectability of data to interactivity before the value becomes apparent. Even then, I’m not sure whether the value is short- or long-term: is it about learning about health and behaviours in the home, or is it about real-time monitoring and alerting for health management? The long-term value will come with the latter; for the former, people might just want a rent-a-kit that allows them to learn about their behaviours and adapt them over maybe 2-3 months. But this is all in the future. The current home is a prototype to test what is technically possible. The team have paid a lot of attention to privacy and trust, but not much yet to value. That's going to be the next exciting challenge...

Friday, 4 March 2016

What's in it for me? The challenges of designing interventions for others

"Uninvited guests" is an entertaining short video showing possible, compelling, responses to well-meaning digital interventions for wellbeing that an elderly relative is encouraged to use.

Recently, a friend (I'll call her Hanna) told me about her experience of something similar, and it highlighted to me just how challenging it is to design well to help others to live well, and how important it is to make new designs of direct value, and easy to understand.

Hanna's parents are elderly, and had been plagued by nuisance calls: some just irritating, but others that involved mis-selling, "fixing" a computer virus, or otherwise leaving her parents feeling unsettled and cheated. She wanted to work with them to help avoid these calls. They installed Truecall on the line. And for a couple of weeks, it seemed to be working really well: letting through trusted callers while blocking unknown callers. A couple of unrecognised callers contacted her to request access and she extended the list of trusted callers in response. All good!

Then things started to unravel. An elderly acquaintance who wasn't on the list tried calling, did not understand the 'blocking message' immediately, and promptly drove round to her parents' house to ask what was going on. They found this really embarrassing, and it undermined their trust in the system. Hanna worked with her parents to add every known acquaintance to the list of trusted callers. But their fear of missing even one 'real' call had been triggered. At least: that was the surface presentation; I suspect there was more going on.

Apparently, when adding names to the list of trusted callers, Hanna's parents talked about the data entry as if they would then be able to use the list as a phone book. That would have been useful to them. But of course it didn't have that functionality (it's a call blocker, not a call enabler). They had a poor mental model of how Truecall worked and what it did. I'm guessing that this lack of understanding made them feel alienated and disempowered.

Hanna showed her parents their own call log, highlighting all the nuisance calls that had been blocked, and that therefore had not been disruptive. But this was apparently not persuasive at all: they could not remember the occasions where they had been persuaded by mis-selling, and now the concern about missing genuine calls dominated completely. Indeed, Hanna's parents seemed to grow in confidence regarding their ability to manage nuisance calls with every day that passed, and Truecall seemed to become a device that questioned their competence.

They told her about one of their friends also using Truecall. But she told me she couldn't work out whether this was a positive comment (this is catching on; we're ahead of the curve) or a negative comment (that friend is getting old and having difficulty screening nuisance calls).

At one level, Truecall is a technology that does one job and seems to do it very well. At another level, it is a social device. The fact that their use of Truecall was visible to a few of their friends and acquaintances seems to have made it unacceptable, even "embarrassing". I'm guessing that being autonomous, feeling in control and not being seen to use a call blocker matters more to them than avoiding nuisance calls.
 
We all use technologies that we don’t fully understand. But we need to understand them well enough to feel in control, and it seems as if Truecall went beyond that for these elderly people and their equally elderly friends.

Truecall has had rave reviews, and it really does seem to do its job very well. So it was a surprise to me when Hanna told me about her and her parents' experiences. Maybe, even though I'm pretty sure that Hanna's parents are in the target market segment for Truecall, for something to work for them it would have to be even easier to use, even more transparent. I'm guessing it would have needed the following features:
  • everything accessible without obviously accessing the internet (so, visible on a dedicated display with the phone).
  • offering the 'phone book' capability so that they could more easily make calls.
  • having three call categories that are simultaneously enabled: trusted (come straight through); zapped (blocked, including all withheld numbers); and unknown (with a really easy way to move unknown numbers into trusted or zapped, whether before or after accepting the call).
I'm not sure that this is technically possible at the moment – or if it is, it might be prohibitively expensive to implement. But hopefully it will be possible in the near future. For me, the most important insight is that there are some very subtle emotional and social values that tip a technology from being something to engage with to being something that is rejected. In the uninvited guests video, the star of the show is technology savvy enough to subvert the best of intentions of his family and of the technology design; in Hanna's case, it seems that the only option for her parents was to reject the technology completely. We still have a lot to learn about how to design technology that is truly empowering.
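
To make that three-category idea concrete, here is a minimal sketch of the screening logic described in the list above. It is a thought experiment only: the class, names and numbers are hypothetical, and this is not how Truecall actually works.

```python
# Hypothetical sketch of a three-category call screen: trusted, zapped, unknown.
# A thought experiment only, not Truecall's actual design or API.
from enum import Enum
from typing import Dict, Optional, Set


class Category(Enum):
    TRUSTED = "trusted"   # comes straight through
    ZAPPED = "zapped"     # blocked, including all withheld numbers
    UNKNOWN = "unknown"   # rings, and can be reclassified in one step


class CallScreen:
    def __init__(self) -> None:
        self.lists: Dict[Category, Set[str]] = {Category.TRUSTED: set(), Category.ZAPPED: set()}

    def categorise(self, number: Optional[str]) -> Category:
        """Decide what happens to an incoming call."""
        if number is None:                        # withheld number
            return Category.ZAPPED
        if number in self.lists[Category.TRUSTED]:
            return Category.TRUSTED
        if number in self.lists[Category.ZAPPED]:
            return Category.ZAPPED
        return Category.UNKNOWN

    def move(self, number: str, to: Category) -> None:
        """Reclassify a number, whether before or after accepting the call."""
        for members in self.lists.values():
            members.discard(number)
        if to is not Category.UNKNOWN:
            self.lists[to].add(number)


screen = CallScreen()
screen.move("01962 496000", Category.TRUSTED)    # a made-up 'trusted friend' number
print(screen.categorise("01962 496000").value)   # trusted: comes straight through
print(screen.categorise(None).value)             # zapped: withheld numbers are blocked
print(screen.categorise("020 7946 0000").value)  # unknown: easy to move to trusted or zapped
```

In such a design, the 'phone book' capability that Hanna's parents expected would simply be a readable view over the trusted list.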

Monday, 31 August 2015

The Digital Doctor


I’ve just finished reading The Digital Doctor by Robert Wachter. It was published this year, and gives great insight into US developments in electronic health records, particularly over the past few years: Meaningful Use and the rise of EPIC. The book manages to steer a great course between being personal (about Wachter’s career and the experiences of people around him) and drawing out general themes, albeit from a US perspective. I’d love to see an equivalent book about the UK, but suspect there would be no-one qualified to write it.

The book is simultaneously fantastic and slightly frustrating. I'll deal with the frustrating first: although Wachter claims that a lot of the book is about usability (and indeed there are engaging and powerful examples of poor usability that have resulted in untoward incidents), he seems unaware that there’s an entire discipline devoted to understanding human factors and usability, and that people with that expertise could contribute to the debate: my frustration is not with Wachter, but with the fact that human factors is apparently still so invisible, and there still seems to be an assumption that the only qualification that is needed to be an expert in human factors is to be a human.

The core example (the overdose of a teenage patient with 38.5 times the intended dose of a common antibiotic) is told compellingly from the perspectives of several of the protagonists:

  • poor interface design led to the doctor specifying the dose in mg while the system defaulted to mg/kg, and therefore multiplied the intended dose by the weight of the patient (see the sketch below);
  • the system issued so many indistinguishable alerts (most very minor) that staff had become habituated to cancelling them without much thought – and one of the reasons for so many alerts is the EHR supplier covering themselves against liability for error;
  • the pharmacist who checked the order was overloaded and multitasking, was using an overly complicated interface, and trusted the doctor;
  • the robot that issued the medication had no ‘common sense’ and did not query the order;
  • the nurse who administered the medication was new and didn’t have anyone more senior to quickly check the prescription with, so assumed that all the earlier checks would have caught any error and the order must be correct;
  • the patient was expecting a lot of medication, so didn’t query how much “a lot” ought to be.
This is about design and culture. The book says surprisingly little about designing for safety from the outset (it’s hardly as if “alert fatigue” is a new phenomenon, or as if confusable units in a user interface are surprising or new): while those involved in deploying new technology in healthcare should be able to learn from their own mistakes, there’s surely also room for learning from the mistakes (and the expertise!) of others.
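
To make the unit confusion in the first bullet concrete, here is a minimal sketch of the arithmetic. The function name and figures are hypothetical, not the hospital's actual system or data; the only point is that when a total dose entered in mg is read as a per-kilogram dose, the system multiplies it by the patient's weight.

```python
# Illustrative sketch only: how a unit-default mismatch multiplies a dose.
# Hypothetical names and numbers, not the actual EHR's logic or this patient's data.

def administered_dose_mg(entered_value: float, assumed_unit: str, weight_kg: float) -> float:
    """Return the absolute dose (in mg) that the system would order."""
    if assumed_unit == "mg":
        return entered_value              # the clinician's intent: a total dose
    if assumed_unit == "mg/kg":
        return entered_value * weight_kg  # the system's default: a per-kilogram dose
    raise ValueError(f"unknown unit: {assumed_unit}")

intended_total_mg = 160.0   # hypothetical intended total dose
patient_weight_kg = 38.5    # hypothetical weight, chosen to match the 38.5x factor in the book

ordered = administered_dose_mg(intended_total_mg, "mg/kg", patient_weight_kg)
print(ordered / intended_total_mg)  # -> 38.5 times the intended dose
```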

The book covers a lot of other territory: from the potential for big data analytics to transform healthcare to the changing role of the patient (and the evolving clinician–patient relationship) and the cultural context within which all the changes are taking place. I hope that Wachter’s concluding optimism is well founded. It’s going to be a long, hard road from here to there that will require a significant cultural shift in healthcare, and across society. This book really brought home to me some of the limitations of “user centred design” in a world that is trying to achieve such transformational change in such a short period of time, with everyone having to just muddle through. This book should be read by everyone involved in the procurement and deployment of new electronic health record systems, and by their patients too... and of course by healthcare policy makers: we can all learn from the successes and struggles of the US health system.

Saturday, 22 August 2015

Innovation for innovation's sake?

As Director of the UCL Institute of Digital Health, my job is to envision the future. The future is fueled by innovation and vision. And there's plenty of that around. But the reality is much more challenging: as summarised in a recent blog post, most people aren't that interested in engaging with their health data (the ones who are most likely to be tracking their data are young, fit and wealthy), and most clinicians are struggling to even do their basic (reactive) jobs, without having much chance to think about the preventative (proactive) steps they might be taking to help people manage their health.

Why might this be? Innovation is creative and fun. It's also essential (without it, we'd still be wallowing around in the primordial soup). But there's a tendency for innovation to assume a world that is simpler than the real world: people who are engaged and compliant and have time to take up the innovation. Innovation tends not to engage with the inconvenient truths of real life, or to tackle the difficult and complex challenges that get in the way of simple visions.

We need a new approach to innovation: one that takes the really difficult challenges seriously, that accepts that the rate of progress may be slow, that recognises that it's much harder to change people and cultural practices than it is to change technology, but that these all need to be aligned for innovation to really work.

We need innovation that works with and for people. And we need to recognise that an important part of innovation is dealing with the inconvenient and difficult problems that seem to beset healthcare delivery, in all its forms.

Sunday, 24 May 2015

Digital Health: tensions between hype and reality

There are many articles predicting an amazing future for digital technologies for healthcare: wearables, implantables, wirelessly enabled to gather vital signs information and collate it for future sensemaking, by the individual, clinicians and population health researchers. Two examples that have recently come to my attention are a video by the American Psychiatric Association and a report by Deloitte. The possibilities are truly transformational.

Meanwhile, I recently visited a friend who has Type II diabetes. On his floor, half hidden by a table, I spotted what I thought was a pen lid. It turned out to be the top of his new lancing kit. Although he had been doing blood glucose checks daily for well over a decade, he hadn't done one for over two weeks. Not just because he'd lost an essential part of the equipment, but because he'd been prescribed the new tool and hadn't been able to work out how to use it. So losing part of it wasn't a big deal: it was useless to him anyway. He told me that when he'd reported his difficulties to his clinician, he'd... been prescribed a second issue of exactly the same equipment. So now he has three sets of equipment: the original (AccuChek) lancing device and blood glucose meter, which he has used successfully for many years, but which he can't use now because he doesn't have spare consumables; and two lancing devices and meters (from a different manufacturer), with plenty of spare consumables, which he can't use because he finds the lancing device too difficult to use. And in trying to work out with him what the problem was, we managed to break one of them. Good thing he's got a spare!

If we think it's just isolated individuals who struggle, it's not: a recent Forbes article reports similar issues at scale, with poor usability of electronic health records and patient portals making work less efficient and effective rather than more.

So on the one hand we have the excitement of future possibilities that are gradually becoming reality for many people, and on the other hand we have the lived experiences of individuals. And for some people, change is not necessarily good. The real challenge is to design a future that can benefit all, not just the most technology-savvy in society.

Friday, 9 January 2015

Compliance, adherence, and quality of life

My father-in-law used to refuse presents on the principle that all he wanted was a "bucket full of good health". And that was something that no one is really in a position to give. Fortunately for him (and us!) he remained pretty healthy and active until his last few weeks. And this is true for many of us: that we have mercifully little experience of chronic ill health. But not everyone is so lucky.

My team has been privileged to work with people suffering from chronic kidney disease, and with their families, to better understand their experiences and their needs when managing their own care. Some people with serious kidney disease have a kidney transplant. Others have dialysis (which involves having the blood 'cleansed' every couple of days). There is widespread agreement amongst clinicians that it's best for people if they can do this at home. And the people we worked with (who are all successful users of dialysis technology at home) clearly agreed. They were less concerned, certainly in the way they talked with us, about their life expectancy than about the quality of their lives: their ability to go out (for meals, on holiday, etc.), to work, to be with their families, to feel well. Sometimes, that demanded compromise: some people reported adopting short-cuts, mainly to reduce the time that dialysis takes. And one had her dialysis machine set up on her verandah, so that she could dialyse in a pleasant place. Quality of life matters too.

The health literature often talks about "compliance" or "adherence", particularly in relation to people taking medication. There's the same concern with dialysis: that people should be dialysing according to an agreed schedule. And mostly, that seemed to be what people were doing. But sometimes they didn't because other values dominated. And sometimes they didn't because the technology didn't work as intended and they had to find ways to get things going again. Many of them had turned troubleshooting into an art! As more and more health management happens at home, which means that people are immediately and directly responsible for their own welfare, it seems likely that terms like "compliance" and "adherence" need to be re-thought to allow us all to talk about living as enjoyably and well as we can – with the conditions we have and the available means for managing those conditions. And (of course) the technology should be as easy to use and safe as possible. Our study is hopefully of interest: not just to those directly affected by kidney disease or caring or designing technology for managing it, but also for those thinking more broadly about policy on home care and how responsibility is shared between clinicians, patients and family.

Wednesday, 7 January 2015

Strategies for doing fieldwork for health technology design


The cartoons in this blog post are from Fieldwork for Healthcare: Guidance for Investigating Human Factors in Computing Systems, © 2015 Morgan and Claypool Publishers, www.morganclaypool.com. Used with permission.
One of the themes within CHI+MED has been better understanding how interactive medical devices are used in practice, recognising that there are often important differences between work as imagined and work as done.  This has meant working with many people directly involved in healthcare (clinicians, patients, relatives) to understand their work when interacting with medical devices: observing their interactions and interviewing them about their experiences. But doing fieldwork in hospitals and in people’s homes is challenging:
  • You need to get formal ethical clearance to conduct any study involving clinicians or patients. As I’ve noted previously, this can be time-consuming and frustrating. It also means that it can be difficult to change the study design once you discover that things aren’t quite the way you’d imagined, however much preparatory work you’d tried to do. 
  • Hospitals are populated by people from all walks of life, old and young, from many cultures and often in very vulnerable situations. They, their privacy and their confidentiality need to be respected at all times.
  • Staff are working under high pressure. Their work is part-planned, part-reactive, and the environment is complex: organisationally, physically, and professionally. The work is safety-critical, and there is a widespread culture of accountability and blame that can make people wary of being observed by outsiders.
  • Healthcare is a caring profession and, for the vast majority of staff, technology use is a means to an end; the design of that technology is not of interest (beyond being a source of frustration in their work).
  • You’re always an ‘outsider’: not staff, not patient, not visitor, and that’s a role that it can be difficult to make sense of (both for yourself and for the people you’re working with).
  • Given the safety-critical nature of most technologies in healthcare, you can’t just prototype and test ‘in the wild’, so it can be difficult to work out how to improve practices through design.

When CHI+MED started, we couldn’t find many useful resources to guide us in designing and conducting studies, so we found ourselves ‘learning on the job’. And through discussions with others we realised that we were not alone: that other researchers had very similar experiences to ours, and that we could learn a lot from each other.

So we pooled expertise to develop resources to give future researchers a ‘leg up’ for planning and conducting studies. And we hope that the results are useful resources for future researchers:

  • We’ve recently published a journal paper that focuses on themes of gaining access; developing good relations with clinicians and patients; being outsiders in healthcare settings; and managing the cultural divide between technology human factors and clinical practice.
  • We’ve published two books on doing fieldwork in healthcare. The first volume reported the experiences of researchers through 12 case studies, covering experiences in hospitals and in people’s homes, in both developed and developing countries. The second volume presents guidance and advice on doing fieldwork in healthcare. The chapters cover ethical issues, preparing for the context and networking, developing a data collection plan, implementing a technology or practice, and thinking about impact.
  • Most of our work is neither pure ethnography nor pure Grounded Theory, but somewhere between the two in terms of both data gathering and analysis techniques: semi-structured, interpretivist, pragmatic. There isn’t an agreed name for this kind of study, but we’re calling them semi-structured qualitative studies, and have written about them in these terms.

If you know of other useful resources, do please let us know!

Wednesday, 8 October 2014

Three steps to developing a successful app

This comment is based on studies of healthcare apps, and some recent conversations I've had, but I'm guessing it applies more widely:
  1. Consider what the 'added value' of the app is intended to be. 'Because it's the 21st century' and 'Because everyone uses apps' are not added value. What are the benefits to the user of doing something with an app rather than either doing it some other way or not doing it at all? Make sure there is clear added value.
  2. Consider how people will fit app use into their lives. Is it meant to be used when a particular trigger event happens (e.g., the user is planning to go for a run, or isn't feeling well today), or regularly (after every meal, first thing every morning, or whatever)? How will people remember to use the app when intended? Make sure the app fits people's lives and that any reminders or messages it delivers are timely.
  3. What will people's motivations for using the app be? Are there immediate intrinsic rewards or longer term benefits? Will these be apparent to users? Does there need to be an extrinsic reward, such as competing against others or gaining 'points' of some kind, or might this be counter-productive? Are there de-motivators (such as poor usability)? Make sure the app taps into people's motivations and doesn't put obstacles in the way of people realising the envisaged rewards.
A fourth important point is to recognise that every person is different: different lifestyle, different motivations, different on many other dimensions. So there almost certainly isn't a "one size fits all" app that everyone will love and engage with. But good and appropriate design will work for at least some people.

Tuesday, 1 April 2014

Looking for the keys under the lamp post? Are we addressing the right problems?

Recently, I received an impassioned email from a colleague: "you want to improve the usability of the bloody bloody infusion pump I am connected to? give it castors and a centre of gravity so I can take it to the toilet and to get a cup of coffee with ease". Along with photos to illustrate the point.

He's completely right: these are (or should be) important design considerations. People still want to live their lives and have independence as far as possible, and that's surely in the interests of staff as well as patients and their visitors.

In this particular case, better design solutions have been proposed and developed. But I've never seen one of these in use. I've seen plenty of improvised solutions instead, such as the bed-bound patient being wheeled from one ward to another with a nurse walking alongside holding up the bag of fluid while the pump is balanced on the bed with the patient.

Why don't hospitals invest in better solutions? I don't know. Presumably because the problem is invisible to the people who make purchasing decisions, because staff and patients are accustomed to making do with the available equipment, and because better equipment costs more but has minimal direct effect on patient outcomes.

An implication of the original message is that in CHI+MED we're addressing the wrong problem: that in doing research on interaction design we're missing the in-your-face problem that the IV pole is so poorly designed. That we're like the drunk looking for the keys under the lamp post because that's where the light is, when in fact the keys got dropped somewhere else. Others who claim that the main problem in patient safety is infection control are making the same point: we're focusing our attention in the wrong place.

I wish there were only one problem to solve – one key to be found, under the lamp post or elsewhere. But that's not the case. In fact, in healthcare there are so many lost keys that they can be sought and found all over the place. Excuse me while I go and look for some more...

Thursday, 27 March 2014

Mind the gap: the gulfs between idealised and real practice

I've given several talks and written short articles about the gap between idealised and real practice in the use of medical devices. But to date I've blurred the distinctions between concerns from a development perspective and those from a procurement and use perspective.

Developers have to make assumptions about how their devices will be used, and to design and test (and build safety cases, etc.) on that basis. Their obligation (and challenge) is to make the assumptions as accurate as possible for their target market segment. And to make the assumptions as explicit as possible, particularly for subsequent purchasing and use. This is easier said than done: I write as someone who on Tuesday signed an agreement for a pile of work on our car, most of which was required but part of which was not; how the unnecessary work got onto the job sheet I do not know, but because I'd signed for it, I had to pay for it. Ouch! If I can accidentally sign for a little bit of unnecessary work on the car, how much easier is it for a purchasing officer to sign off unnecessary features, or slightly inappropriate features, on a medical device? [Rhetorical question.]

Developers have to work for generic market segments, whether those are defined by the technological infrastructure within which the device sits, the contexts and purposes for which the device will be used, the level of training of its users, or all of the above. One device probably can't address all needs, however desirable 'consistency' might be.

In contrast, a device in use has to fit a particular infrastructure, context, purpose, user capability... So there are many knowns where previously there were unknowns. And maybe the device fits well, and maybe it doesn't. And if it doesn't, then something needs to change. Maybe it was the wrong device (and needs to be replaced or modified); maybe it's the infrastructure or context that needs to be changed; maybe the users need to be trained differently / better.

When there are gaps (i.e., when technology doesn't fit properly), people find workarounds. We are so ingenious! Some of the workarounds are mostly positive (such as appropriating a tool to do something it wasn't designed for, but for which it serves perfectly well); some introduce real vulnerabilities into the system (by violating safety features to achieve a short-term goal). When gaps aren't even recognised, we can't even think about them or how to design to bridge them. We need to be alert to the gaps between design and use.

Friday, 8 November 2013

That was easy: Understanding Usability and Use

For a long time (measured in years rather than days or weeks), I've been struggling with the fact that the word "usability" doesn't seem to capture the ideas that I consider to be important. Which are about how well a device actually supports a person in doing the things they want to do.

Some time ago, a colleague (apparently despairing of me) gave me a gift: a big red button that, when you press it, announces that "That was easy". Yep: easy, but also (expletive deleted) pointless.

So if someone is given an objective ("Hey, press this button!") then ease of use is important, and this button satisfies that need. Maybe the objective is expressed less directly ("Press a red button", which would require finding the red button to press, or "Do something simple", which could be interpreted in many different ways), and the role of the "easy" button isn't so obvious. Ease of use isn't the end of the story because, while it's important that it is easy to do what you want to do, it's also important that what you want to do is something that the device supports easily. In this case, there probably aren't many people who get an urge to press an "easy" button. So it's easy, but it's not useful, or rewarding (the novelty of the "easy" button wore off pretty fast).

So it doesn't just matter that a system is usable: it also matters that that system does the things that the user wants it to do. Or an appropriate subset of those things. And in a way that makes sense to the user. It matters that the system has a use, and fits the way the user wants to use it.

That use may be pure pleasure (excite, titillate, entertain), but many pleasures (such as that of pressing an "easy" button) wear off quickly. So systems need to be designed to provide longer term benefit... like really supporting people well in doing the things that matter to them – whether in work or leisure.

Designing for use means understanding use. It means understanding the ways that people think about use. In quite a lot of detail. So that use is as intuitive as possible. That doesn't mean designing for oneself, but learning about the intended users and designing for them. And no designing things that are "easy" but inappropriate!

Thursday, 10 October 2013

Safety: the top priority?

For the past two days, I've been at the AAMI Summit on Healthcare Technology in Nonclinical Settings. The talks have all been short and to-the-point, and generally excellent. They included talks from people with first-hand experience of living with or caring for someone with a long term condition, as well as developers, researchers... but no regulators, because of the US government shutdown. For many of the participants, the more memorable talks have been the first hand accounts of living with medical devices and of the things people do and encounter. I'll change names, but the following are some examples.

Megan's partner is on oxygen therapy. The cylinders are kept in a cupboard near an air conditioning unit. One day, a technician visited to fix something on the aircon unit. As he was working, she heard a sound like a hot air balloon. She stopped him just in time: he had just ignited a blow-torch. Right next to the oxygen cylinders. Naked flames and oxygen are an explosive combination. In this case, the issue was one of ignorance: the cylinders weren't sufficiently clearly labelled for the technician to realise what they were. However, there are also accounts of people on oxygen therapy smoking; in some cases, people continued to smoke even after suffering significant burns. That's not ignorance; it's a choice they make. Apparently, the power of the cigarette is greater than safety considerations.

Fred's son has type 1 diabetes. He was being bullied at school, to a degree that he found hard to bear. He took to poking a pencil into his insulin pump to give himself an excess dose, causing hypoglycemia so that his parents would be called to take him home (or, in more serious cases, to hospital). Escaping being bullied was more important than suffering the adverse effects of hypoglycemia.

In our own studies, we have found people making tradeoffs such as these. The person with diabetes who avoids taking his glucose meter or insulin on a first date because he doesn't want the new girlfriend to know about the diabetes until they have got to know each other (as people) a bit better first. The person on home haemodialysis who chooses to dialyse on her veranda even though the dialysate doesn't work well when it is cold, so she needs to use a patio heater as well. The veranda is a much more pleasant place to be than indoors, so again she's making a tradeoff.

Patient safety is a gold standard. We have institutes and agencies for patient safety. It's incumbent on the healthcare system (clinicians, manufacturers, regulators, etc.) to minimise the risks to patients of their treatment, while recognising that risks can't be eliminated. But we also need to remember that patients are also people. And as people we don't always prioritise our own safety. We drive fast cars; we enjoy dangerous sports; we dodge traffic when crossing the road; etc. We're always making tradeoffs between safety and other values. That doesn't change just because someone's "a patient".

Friday, 6 September 2013

The look of the thing matters

Today, I was at a meeting. One of the speakers suggested that the details of the way information is displayed in an information visualisation don't matter. I beg to differ.

The food at lunchtime was partly finger-food and partly fork-food. Inevitably, I was talking with someone whilst serving myself, but my attention was drawn to the buffet when a simple expectation was violated. The forks looked metallic and solid [photo], so I expected them to be weighty. But the one I picked up felt insubstantial and plastic [photo]: the metallic look and the form gave an appearance that didn't match reality.

I remember a similar feeling of being slightly cheated when I first received a circular letter (from a charity) where the address was printed directly onto the envelope using a handwriting-like font and with a "proper" stamp (queen's head and all that). Even though I didn't recognise the handwriting, I immediately expected a personal letter inside – maybe an invitation to a wedding or a party. But no: an invitation to make a donation to the charity. That's not exciting.

The visual appearance of such objects introduces a dissonance between expectation and fact, forcing us to shift from type 1 (fast, intuitive) thinking to type 2 (slow, deliberate) thinking. As the fork example shows, it's possible to create this kind of dissonance in the natural (non-digital) world. But it's much, much easier in the digital world to deliberately or accidentally create false expectations. I'm sure I'm not the only person to feel cheated when this happens.

Thursday, 18 July 2013

When reasoning and action don't match: Intentionality and safety

My team have been discussing the nature of “resilient” behavior, the basic idea being that people develop strategies for anticipating and avoiding possible errors, and creating conditions that enable them to recover seamlessly from disturbances. One of the examples that is used repeatedly is leaving one’s umbrella by the door as a reminder to take it when going out in case of rain. Of course, getting wet doesn’t seriously compromise safety for most people, but let’s let that pass: it’s unpleasant. This presupposes that people are able to recognize vulnerabilities and identify appropriate strategies to address them. Two recent incidents have made me rethink some of the presuppositions.

On Tuesday, I met up with a friend. She had left her wallet at work. It had been such a hot day that she had taken it out of her back pocket and put it somewhere safe (which was, of course, well hidden). She recognized that she was likely to forget it, and thought of ways to remind herself: leaving a note with her car keys, for instance. But she didn’t act on this intention. So she had done the learning and reflection, but it still didn’t work for her because she didn’t follow through with action.

My partner occasionally forgets to lock the retractable roof on our car. I have never made this mistake, but wasn’t sure why until I compared his behavior with mine. It turns out he is more relaxed than I am, and waits while the roof closes before taking the next step, which is often to close the windows, take the keys out of the lock and get out of the car. I, in contrast, am impatient. I can’t wait to lock the roof as it closes, so as the roof is coming over, my arm is going up ready to lock it. So I never forget (famous last words!): the action is automatised. The important point in relation to resilience is that I didn’t develop this behavior in order to keep the car safe or secure: I developed it because I assumed that the roof needed to be secured and I wanted it to happen as quickly as possible. So it is not intentional, in terms of safety, and yet it has the effect of making the system safer.

So what keeps the system safe(r) is not necessarily what people learn or reflect on, but what they act on. This is, of course, only one aspect of the problem; when major disturbances happen, it’s almost certainly more important to consider people’s competencies and knowledge (and how they acquired them). To (approximately) quote a London Underground controller: “We’re paid for what we know, not what we do”. Ultimately, it's what people do that matters in terms of safety; sometimes that can be clearly traced to what they know and sometimes it can't.