Saturday, 13 July 2013

Parallel information universes

A few years ago, a raised white spot developed on my nose. It's not pretty, so I'm not going to post a picture of it. I didn't worry about it for a while; then I tried some internet searching to work out what it was and whether I should do anything about it.

A search for "raised white spot on skin" suggested that "seborrheic keratosis" was the most likely explanation. But I did an image search on that term and it was clearly wrong: wrong colour, wrong texture, wrong size...

"One should visit a doctor immediately when this signs arise": ignoring the grammatical problem in that advice, I booked an appointment with my doctor. She assured me that there is nothing to worry about -- that it is an "intradermal naevus", that there would be information about it on dermnetnz.org. Well, actually, no: information on Becker naevus (occurs mostly in men, has a dark pigment); on Sebaceous naevus (bright pink, like birth marks), Blue naevus (clue is in the colour)... and many other conditions that are all much more spectacular in appearance than a raised white spot. I find pages of information including words ending in "oma": melanoma, medulloblastoma, meningioma, carcinoma, lymphoma, fibroma. If the condition is serious, there is information out there about it. But the inconsequential? Not a lot, apparently. Contrary to my earlier belief, knowing the technical terms doesn't always unlock the desired information.

I look further. I find information on a patient site, but it's for healthcare professionals: "This is a form of melanocytic naevus [...] The melanocytes do not impart their pigmentation to the lesion because they are located deep within the dermis, rather than at the dermo-epidermal junction (as is the case for junctional naevi/compound naevi)." I feel stupid: I have a PhD, but it's not in medicine or dermatology, and I have little idea what this means.

I eventually work out that naevus (plural naevi) is another term for mole. I try searching for "white mole" and find general forums (as well as pictures of small furry creatures who dig). The forums describe something that sounds about right, but they lack clinical information on causes, treatment, or how the condition is likely to develop without treatment.

At that point, I give up. Lay people and clinicians apparently live in parallel universes when it comes to health information. All the challenges of interdisciplinary working that plague research projects also plague other interactions – at least when it comes to understanding white moles that are not cancerous and don't eat worms for breakfast.

Saturday, 22 June 2013

Time management tools that work (or not)

Today, I missed a lunch with friends. Oops! What happened?

My computer died (beyond repair) a couple of months ago, so I got a new one. Rather than trying to reconstruct my previous way of working, I chose to start again "from scratch", though of course the new set-up built a lot on previous practices. One of the changes I introduced was to separate my work and leisure diaries: work is now recorded in the University Standard Diary (aka Outlook) so that managers and administrators can access my diary as needed; leisure is recorded in Google Calendar (which is what I used to use for everything).

But in practice, there's only one of me, and I only live one life. And most of my 'appointments' are work-related. So I forgot to keep looking in the leisure diary. Hence overlooking today's lunch with friends, which had been in the diary for at least six months. Because it had been in the diary for so long it wasn't "in my head". Doh!

When I was younger, life seemed simpler: if it was Monday to Friday, 9-5 (approx), then it was work time; otherwise it was leisure time. Except holidays. Keep two diaries, one for work and one for leisure. Easy. But the boundaries between work and leisure have blurred. Personal technologies travel to work; work technologies come home; work-time and home-time have poorly defined boundaries. It's hard to keep the plans and schedules separate. But I, like most people, don't particularly want work colleagues to know the minutiae of my personal life. Yes, the work diary allows one to mark entries as "private", but:
1) that suggests that it's a "private" work event, and
2) an entry in a "work" diary is not accessible to my family, although I'd like them to be able to refer to my home diary.

The ontology of my diary is messed up: I want work colleagues to be able to access my work diary and family to be able to access my leisure diary, but actually at the heart of things I want to be able to manage my life, which isn't neatly separated into work and leisure.
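What I actually want, in other words, is one underlying diary with per-audience views, rather than two separate diaries. A minimal sketch in Python of that idea (the event fields and audience labels are my own invention for the sketch; neither Outlook nor Google Calendar exposes anything like this):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    """One entry in a single, unified life diary."""
    title: str
    start: datetime
    audiences: set = field(default_factory=set)  # who may see this entry

def view(diary, audience):
    """The subset of the diary that a given audience is allowed to see."""
    return [e for e in diary if audience in e.audiences]

diary = [
    Event("Project meeting", datetime(2013, 6, 24, 10), {"me", "colleagues"}),
    Event("Lunch with friends", datetime(2013, 6, 22, 12, 30), {"me", "family"}),
    Event("Dentist", datetime(2013, 6, 25, 9), {"me", "family"}),
]

# Colleagues see only work; family see only leisure; I see everything.
for audience in ("colleagues", "family", "me"):
    print(audience, "->", [e.title for e in view(diary, audience)])
```

The point of the sketch is that "work" and "leisure" become views over one life, rather than two diaries that I have to remember to check separately.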

Saturday, 18 May 2013

When is a medical error a crime?

I've recently had Collateral Damage recommended to me. I'm afraid I can't face reading it: just the summary is enough. Having visited Johns Hopkins, and in particular the Armstrong Institute for Patient Safety, a couple of months ago, I'm pretty confident that the terrible experience of the Walter family isn't universal, even within that one hospital, never mind nationally or internationally. And therein lies a big challenge: that there is such a wide spectrum of experiences and practices in healthcare that it's very difficult to generalise.

There are clearly challenges:
  • the demands of doing science and of providing the best quality patient care may pull in opposing directions: if we never try new things, relying on what is already known as best practice, we may not make discoveries that actually transform care.
  • if clinicians are not involved in the design of future medical technologies then how can those technologies be well designed to support clinical practice? But if clinicians are involved in their design, and have a stake in their commercial success, how can they remain objective in their assessments of clinical effectiveness?
There are no easy answers to such challenges, but clearly they are cultural and societal challenges as well as being challenges for the individual clinician. They are about what a society values and what behaviours are acceptable and/or rewarded, whether through professional recognition or financially.

I know that I have a tendency to view things positively, to argue for a learning culture rather than a blame culture. Accounts like "Collateral Damage" might force one to question that position as being naive in the extreme. For me, though, the question is: what can society and the medical establishment learn from such an account? That's not an easy question to answer. Progress in changing healthcare culture is almost imperceptibly slow: reports such as "To Err is Human" and "An Organisation with a Memory", both published over a decade ago (and the UK report now officially 'archived'), haven't had much perceptible effect. Consider, for example, the recent inquiry into failings in Mid Staffordshire.

Bob Wachter poses the question "when is a medical error a crime?". He focuses on the idea of a 'just culture': that there is a spectrum of behaviours, from the kinds of errors that anyone could make (and for which learning is a much more constructive response than blaming), through 'at risk' behaviours to 'reckless' behaviours where major risks are knowingly ignored.

The Just Culture Community notes that "an organisation's mission defines its reason for being". From a patient's perspective, a hospital's "reason for being" is to provide the best possible healthcare when needed. Problems arise when the hospital's mission is "to generate a profit", to "advance science", or any other mission that might be at odds with providing the best possible care in the short term. The same applies to individual clinicians and clinical teams within the hospital.

I find the idea of a "just culture" compelling. It is not a simple agenda, because it involves balancing learning with blame, giving a sophisticated notion of accountability. It clearly places the onus for ensuring safety at an organisational / cultural level, within which the individual works, interacts and is accountable. But it does presuppose that the different people or groups broadly agree on the mission or values of healthcare. 'Collateral Damage' forces one to question whether that assumption is correct. It is surely a call for reflection and learning: what should the mission of any healthcare provider be? How is that mission agreed on by both providers and consumers? How are values propagated across stakeholders? Etc. Assuming that patient safety is indeed valued, we all need to learn from cases such as this.

Coping with complexity in home hemodialysis

We've just had a paper published on how people who need to do hemodialysis at home manage the activity. Well done to Atish, the lead author.

People doing home hemodialysis are a small proportion of the people who need hemodialysis overall: the majority have to travel to a specialist unit for their care. Those doing home care have to take responsibility for a complex care regime. In this paper, we focus on how people use time as a resource to help with managing care. Strategies include planning to perform actions at particular times (so that time acts as a cue to perform an action); allowing extra time to deal with any problems that might arise; building time for reflection into a plan (to minimise the risk of forgetting steps); and organising tasks to minimise the number of things that need to be thought about or done at any one time (minimising peak complexity). There is a tendency to think about complex activities in terms of task sequences, and to ignore the time frame within which people carry out tasks, and the way that time (and our experience of time) can be used as a resource as well as, conversely, placing demands on us (e.g. through deadlines).
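To make "minimising peak complexity" concrete: one rough way to quantify it is as the maximum number of tasks competing for attention at any one moment. Here is a toy sketch in Python (the task names and timings are invented for illustration; this is my reading of the idea, not the paper's formal definition):

```python
def peak_complexity(tasks):
    """tasks: list of (name, start_minute, end_minute).
    Returns the largest number of tasks active at any one moment."""
    events = []
    for _, start, end in tasks:
        events.append((start, +1))  # task becomes active
        events.append((end, -1))    # task finishes
    active = peak = 0
    for _, delta in sorted(events):  # ends sort before starts at the same time
        active += delta
        peak = max(peak, active)
    return peak

# Everything squeezed together: three things demand attention at once.
rushed = [("prime lines", 0, 20), ("check fluids", 10, 25), ("set alarms", 15, 30)]
# The same tasks staged so that only one needs attention at a time.
staged = [("prime lines", 0, 20), ("check fluids", 20, 35), ("set alarms", 35, 50)]

print(peak_complexity(rushed))  # 3
print(peak_complexity(staged))  # 1
```

The staged plan takes longer overall, but at no point does the person have to hold three things in mind at once; that trade-off is exactly the kind of strategy the paper describes.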

This study focused on a particular (complex and safety-critical) activity that has to be performed repeatedly (every day or two) by people who may not be clinicians but who become experts in the task. We all do frequent tasks, whether that's preparing a meal or getting ready to go to work, that involve time management. There's great value in regarding time as a resource to be used effectively, as well as something that places demands on us (not enough time...).

Sunday, 12 May 2013

Engineering for HCI: Upfront effort, downstream pay-back

[Photo: the end of Engineering Practice 1 (c. 1980)]

Once upon a time, I was a graduate trainee at an engineering company. The training was organised as three-month blocks in different areas of the company. My first three months were on the (work)shop floor. Spending hours working milling machines and lathes was a bit of a shock after studying mathematics at Cambridge. You mean it is possible to use your body as well as your mind to solve problems?!?

I learned that engineering was about the art of the possible (e.g. at that time you couldn't drill holes that went around corners, though 3D printing has now changed our view of what is possible). And also about managing precision: manufacturing parts that were precise enough for purpose. Engineering was inherently physical: about solving problems by designing and delivering physical artefacts that were robust and reliable and fit for purpose. The antithesis of the "trust me, I'm an engineer" view (however much that makes me smile).

Enter "software engineering": arguably, this term was coined to give legitimacy to a certain kind of computer programming. Programming was (and often still is) something of a cottage industry: people building one-off systems that seem to work, but no-one is quite sure of how, or when they might break down. Engineering is intended to reduce the variability and improve the reliability of software systems. And deliver systems that are fit for purpose.

So what does it mean to "engineer" an interactive computer system? At the most recent IFIP Working Group 2.7/13.4 meeting, we developed a video: 'Engineering for HCI: Upfront effort, downstream pay-back'. And it was accepted for inclusion in the CHI2013 Video Showcase. Success! Preparing this short video turned out to be even more difficult than I had anticipated. There really didn't seem to be much consensus on what it means to "engineer" an interactive computer system. There is general agreement that it involves some rigour and systematicity, some use of theory and science to deliver reproducible results, but does the resulting system have to be usable, to be fit for purpose? And how would one measure that? Not really clear.

I started by saying that I once worked for an engineering company. That term is probably fairly unambiguous. But I've never heard of an "interactive systems engineering company" or an "HCI engineering company". I wonder what one of those would look like or deliver.

Saturday, 27 April 2013

When I get older: the uncountable positives

Last week, I was at a presentation by John Clarkson. It was a great talk: interesting, informative, thought provoking… Part-way through it, to make a point about the need for accessible technology, he presented a set of graphs showing how human capabilities decline with age. Basically, vision, hearing, strength, dexterity, etc. peak, on average, in the 20s, and it’s downhill all the way from there. It is possible that only two measurable values increase with age: age itself and grumpiness!

So this raises the obvious question: if we peak on every important variable when we’re in our 20s, why on earth aren’t most senior roles (Chief Executive, President, etc.) held by people in their 20s? Is this because grumpiness is in fact the most important quality, or is it because older people have other qualities that make them better suited to these roles? Most people would agree that it’s the latter.

The requisite qualities are often lumped under the term “wisdom”. I’m not an expert on wisdom, but I imagine there’s a literature defining and decomposing this concept to better understand it. One thing’s for sure though: it can’t be quantified in the way that visual or auditory acuity, strength, etc. can. The things that matter most for senior roles are not easily quantified.

We run a risk, in all walks of life, of thinking that if it can’t be measured then it has no value. In research we see it repeatedly in the view that the “gold standard” for research is controlled (quantifiable) experiments, and that qualitative research is “just stories”. In healthcare, this thinking manifests itself in many ways: in measures of clinical effectiveness and other outcome measures. In HCI, it manifests itself in the weight put on efficiency: of course, efficiency has its place (and we probably all have many examples of inefficient, frustrating interfaces), but there are many cases where the less easily measured outcomes (the quality of a search, the engagement of a game) are much more important.

As vision, hearing, memory, etc. decline, I'm celebrating wisdom and valuing the unmeasurable. Even if it can sound like "just stories".

Friday, 26 April 2013

Who's the boss? Time for a software update...

Last summer, I gave a couple of friends a lift to a place I was unfamiliar with, so I used a SatNav to help with the navigation. It was, of course, completely socially unaware. It interrupted our conversation repeatedly, without any consideration for when it is and is not appropriate to interrupt. No waiting for pauses in the conversation. No sensitivity to the importance of the message it was imparting. No apology. Standard SatNav behaviour. And indeed it’s not obvious how one would design it any other way. After a while, we turned off the sound and relied solely on the visual guidance.
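That said, even a crude form of social awareness is imaginable. Suppose, purely hypothetically, that the device could detect whether anyone is currently speaking and knew how urgent each prompt was; the timing decision might then look something like this sketch (every hook here is invented; real SatNavs offer nothing of the kind):

```python
import time

def deliver_prompt(message, urgency, someone_speaking, max_wait_s=10):
    """Decide when to speak a navigation prompt.

    urgency: 0.0 (routine) .. 1.0 (turn is imminent).
    someone_speaking: callable returning True while conversation is ongoing.
    Hypothetical sketch: real SatNavs expose no such hooks.
    """
    if urgency >= 0.9:
        return speak(message)      # safety-critical: interrupt regardless
    waited = 0.0
    while someone_speaking() and waited < max_wait_s:
        time.sleep(0.5)            # wait for a pause in the conversation
        waited += 0.5
    if waited > 0:
        message = "Sorry to interrupt. " + message  # the missing apology
    return speak(message)

def speak(message):
    print(message)  # stand-in for text-to-speech
```

Even this toy policy captures the three things the real device lacked: waiting for pauses, weighing the importance of the message, and apologising when it does barge in.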

More recently, a colleague started up his computer near the end of a meeting, and it went into a cycle of displays: don’t turn me off; downloading one of thirty-three. I took a record of the beginning of this interaction, but gave up and left way before the downloading had finished.
It might have been fine to pull the plug on the downloading (who knows?) but it wasn’t going to be a graceful exit. The technology seemed to be saying: “You’ve got to wait for me. I am in control here.” Presumably, the design was acceptable for a desktop machine that could just be left to complete the task, but it wasn’t for a portable computer that had to be closed up to be taken from the meeting room.
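The policy need not be sophisticated. Something as simple as "only install updates when the machine is on mains power, idle, and not being shut down to leave" would have covered the meeting-room case. A hypothetical sketch (the context checks are my invention; no real updater exposes exactly these):

```python
from dataclasses import dataclass

@dataclass
class MachineContext:
    on_battery: bool        # a laptop about to be unplugged and carried off?
    user_idle_minutes: int
    shutdown_requested: bool

def should_install_updates(ctx, pending_updates):
    """Defer updates unless the machine can safely be tied up.

    A hypothetical policy sketch: real updaters expose no such hook.
    """
    if ctx.shutdown_requested and ctx.on_battery:
        return False  # let the user close the lid and leave the meeting
    if ctx.on_battery:
        return False  # risk of power loss mid-update
    return ctx.user_idle_minutes >= 30 and pending_updates > 0

print(should_install_updates(MachineContext(True, 0, True), 33))     # False
print(should_install_updates(MachineContext(False, 45, False), 33))  # True
```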

I have many more examples, and I am sure that every reader does too, of situations where the design of technology is inappropriate because the technology is unaware of the social context in which it is placed, and the development team have been unwilling or unable to make the technology better fit that context.