Friday 8 November 2013

That was easy: Understanding Usability and Use

For a long time (measured in years rather than days or weeks), I've been struggling with the fact that the word "usability" doesn't seem to capture the ideas I consider most important: how well a device actually supports a person in doing the things they want to do.

Some time ago, a colleague (apparently despairing of me) gave me a gift: a big red button that, when you press it, announces that "That was easy". Yep: easy, but also (expletive deleted) pointless.

So if someone is given an objective ("Hey, press this button!") then ease of use is important, and this button satisfies that need. Maybe the objective is expressed less directly ("Press a red button", which would require finding the red button to press, or "Do something simple", which could be interpreted in many different ways), and the role of the "easy" button isn't so obvious. Ease of use isn't the end of the story because, while it's important that it is easy to do what you want to do, it's also important that what you want to do is something that the device supports easily. In this case, there probably aren't many people who get an urge to press an "easy" button. So it's easy, but it's not useful, or rewarding (the novelty of the "easy" button wore off pretty fast).

So it doesn't just matter that a system is usable: it also matters that that system does the things that the user wants it to do. Or an appropriate subset of those things. And in a way that makes sense to the user. It matters that the system has a use, and fits the way the user wants to use it.

That use may be pure pleasure (excite, titillate, entertain), but many pleasures (such as that of pressing an "easy" button) wear off quickly. So systems need to be designed to provide longer term benefit... like really supporting people well in doing the things that matter to them – whether in work or leisure.

Designing for use means understanding use. It means understanding the ways that people think about use. In quite a lot of detail. So that use is as intuitive as possible. That doesn't mean designing for oneself, but learning about the intended users and designing for them. And no designing things that are "easy" but inappropriate!

Thursday 31 October 2013

Different ways of interacting with an information resource

I'm at a workshop on how to evaluate information retrieval systems, and we are discussing the scope of concern. What is an IR system, and is the concept still useful in the 21st Century, where people engage with information resources in many different ways? The model of information seeking in sessions for a clear purpose still holds for some interactions, but it's certainly not the dominant practice any more.

I was struck when I first used the NHS Choices site that it encourages exploration above seeking: it invites visitors to consume health information they hadn't realised they might be interested in. This is possible with health in a way that it might not be in some other areas because most people have some inherent interest in better understanding their own health and wellbeing. At least some of the time! Such sites encourage unplanned consumption, hopefully leading to new understanding, without having a particular curriculum to impart.

On the way here, I read a paper by Natalya Godbold in which she describes the experiences of dialysis patients. One of the points she makes is that people on dialysis exploit a wide range of information resources in managing their condition – importantly, including how they feel at the time. This takes embodied interaction into a new space (or rather, into a space in which it has been occurring for a long time without being noticed as such): the interaction with the technology affects and is informed by the experienced effects that flow (literally as well as metaphorically) through the body. And information need, acquisition, interpretation and use are seamlessly integrated as the individual monitors, makes sense of and manages their own condition. The body, as well as the world around us, is part of the ecology of information resources we work with, often without noticing.

While many such resources can't be "designed", it's surely important to recognise their presence and value when designing explicit information resources and IR systems.

Thursday 10 October 2013

Safety: the top priority?

For the past two days, I've been at the AAMI Summit on Healthcare Technology in Nonclinical Settings. The talks have all been short and to-the-point, and generally excellent. They included talks from people with first-hand experience of living with or caring for someone with a long-term condition, as well as developers, researchers... but no regulators, because of the US government shutdown. For many of the participants, the most memorable talks have been the first-hand accounts of living with medical devices and of the things people do and encounter. I'll change names, but the following are some examples.

Megan's partner is on oxygen therapy. The cylinders are kept in a cupboard near an air conditioning unit. One day, a technician visited to fix something on the aircon unit. As he was working, she heard a sound like a hot air balloon. She stopped him just in time: he had just ignited a blow-torch. Right next to the oxygen cylinders. Naked flames and oxygen are an explosive combination. In this case, the issue was one of ignorance: the cylinders weren't sufficiently clearly labelled for the technician to realise what they were. However, there are also accounts of people on oxygen therapy smoking; in some cases, people continued to smoke even after suffering significant burns. That's not ignorance; it's a choice they make. Apparently, the power of the cigarette is greater than safety considerations.

Fred's son has type 1 diabetes. He was being bullied at school, to a degree that he found hard to bear. He took to poking a pencil into his insulin pump to give himself an excess dose, causing hypoglycemia so that his parents would be called to take him home (or, in more serious cases, to hospital). Escaping being bullied was more important than suffering the adverse effects of hypoglycemia.

In our own studies, we have found people making tradeoffs such as these. The person with diabetes who avoids taking his glucose meter or insulin on a first date because he doesn't want the new girlfriend to know about the diabetes until they have got to know each other (as people) a bit better first. The person on home haemodialysis who chooses to dialyse on her veranda even though the dialysate doesn't work well when it is cold, so she has to use a patio heater as well. The veranda is a much more pleasant place to be than indoors, so again she's making a tradeoff.

Patient safety is a gold standard. We have institutes and agencies for patient safety. It's incumbent on the healthcare system (clinicians, manufacturers, regulators, etc.) to minimise the risks to patients of their treatment, while recognising that risks can't be eliminated. But we need to remember that patients are also people. And as people we don't always prioritise our own safety. We drive fast cars; we enjoy dangerous sports; we dodge traffic when crossing the road; etc. We're always making tradeoffs between safety and other values. That doesn't change just because someone's "a patient".

Tuesday 8 October 2013

Know your user: it's hard to imagine being older

I've just been reading Penelope Lively's enchanting article "So this is old age". It's an engaging read – at least for someone who sometimes thinks that they are experiencing old age! I love some of her description: for example, "the puzzling thing in old age is to find yourself as the culmination of all [your younger selves], knowing that they are you, but that you are also now this someone else". She is very articulate, and presents something like a persona for an older person. She also reminds us that it's easier for the older person to imagine what it's like to be younger than the reverse.

And yet, the older person she portrays is importantly different from any individual (over the age of, let's say, 75) that I personally know. It is much easier to assume that we all age similarly – that people get more similar as they get older – than to get your head around all the individual differences. But as far as I can tell the opposite is actually the case: variability increases as different kinds of degeneration compete with different enhanced competencies (kinds of wisdom, perceptions, appreciations).

We all have a tendency to stereotype "the other", whether they are older or younger, male or female, a nurse, teacher, cleaner or astronaut. It's much easier to design for people "like me" than to put yourself in someone else's shoes and design for them. Personas have an important role in helping to design for others, but they need to be used with sensitivity to real people. That's surely the best design: designing for people who are different from oneself in ways that empower and delight them.

Monday 7 October 2013

Cultural heritage: sense making and meaning making


Last week, I was presenting at the workshop on Supporting Users' Exploration of Digital Libraries in Malta. One of the themes that came up was the relationship between meaning making and sense making. These seem to be two literatures that have developed in parallel without either referencing the other. Sense making is studied in the broad context of purposeful work (e.g. studying intelligence analysts working with information, photocopier engineers diagnosing problems, or lawyers working on a legal matter). Meaning making is discussed largely within museum studies, where the focus is on how to support visitors in constructing meaning during their visit. Within a cultural heritage context (which was an important focus for the workshop), there is a tendency to consider both, but it is difficult to clearly articulate their relationship.

Paula Goodale suggested that it might be concerned with how personally relevant the understanding is. This is intuitively appealing. For example, when I was putting together a small family tree recently, using records available on the internet, I came across the name Anna Jones about four generations back, and immediately realised that that name features in our family Bible. She's "Anna Davies" on the cover, but "Anna Jones" in the family tree inside. I had not known exactly how Anna and I were related, and the act of constructing the family tree made her more real (more meaningful) to me.




The same can clearly be true for family history resources within a cultural heritage context. But does it apply more broadly in museum curation work?

Following the workshop, we visited St Paul's Catacombs in Rabat (Malta). The audio guide was pretty good for helping to understand the construction of the different kinds of tombs and the ceremonies surrounding death and the commemoration of ancestors. But was this meaning making? I'd say probably not, because it remained impersonal – it had no particular personal meaning for me or my family – and also because, although I was attentive and walked around and looked at things as directed, I did not actively construct new meaning beyond what the curatorial team had invested in the design of the tour. Similarly, it wasn't sense making because I had no personal agenda to address and didn't actively construct new understanding for myself. So – according to my understanding – sense making and meaning making both require very active participation, beyond the engagement that may be designed or intended by educationalists or curators. They can design to enhance engagement and understanding, but maybe not to deeply influence sense making or meaning making. That is much more personal.

Monday 16 September 2013

Affordance: the case of door closing

Last week, I was at (yet another) hotel. In the Ladies' (and presumably also the Gents'), the doors had door-plates on the inside, which facilitated pushing but not pulling. Within HCI, this is often referred to as the object affording a particular action. See, for example, work by Gaver and Hartson. In fact this example goes further than affording: it determines what is physically possible. In the case of doors, the assumption is that on one side you expect to pull and on the other you expect to push.

The problem was that in this case the door hinge was very simple: the door did not automatically close. So the only way to close the cubicle door was to pull on the small handle that was designed as a lock (and that afforded turning but not pulling). The assumption behind having a plate on one side and a handle on the other is that there is a default position for the door, which could have been achieved if the "system" (aka the hinge) had been set up to close the door automatically. But it hadn't. In this case, the user has to both pull and push the door to get it to the desired positions – and yes, privacy is valued by most of us in this situation, so most do want to be able to close the door as well as open it!

I've previously commented that we seem to be unable to design interactive devices as simple as taps; it seems that this extends even to doors... and I don't think interactions get much simpler than this.

Friday 6 September 2013

The look of the thing matters

Today, I was at a meeting. One of the speakers suggested that the details of how information is displayed in an information visualisation don't matter. I beg to differ.

The food at lunchtime was partly finger-food and partly fork-food. Inevitably, I was talking with someone whilst serving myself, but my attention was drawn to the buffet when a simple expectation was violated. The forks had a metallic look and a substantial form, so I expected them to be weighty and solid. But the one I picked up was insubstantial and plastic: the appearance didn't match reality.

I remember a similar feeling of being slightly cheated when I first received a circular letter (from a charity) where the address was printed directly onto the envelope using a handwriting-like font and with a "proper" stamp (queen's head and all that). Even though I didn't recognise the handwriting, I immediately expected a personal letter inside – maybe an invitation to a wedding or a party. But no: an invitation to make a donation to the charity. That's not exciting.

The visual appearance of such objects introduces a dissonance between expectation and fact, forcing us to shift from type 1 (fast, intuitive) thinking to type 2 (slow, deliberate) thinking. As the fork example shows, it's possible to create this kind of dissonance in the natural (non-digital) world. But it's much, much easier in the digital world to deliberately or accidentally create false expectations. I'm sure I'm not the only person to feel cheated when this happens.

Tuesday 20 August 2013

Hidden in full view: the daft things you overlook when designing and conducting studies

Several years ago, when Anne Adams and I were studying how people engaged with health information, we came up with the notion of an "information journey", with three main stages: recognising an information need; gathering information; and interpreting that information. The important point (to us) in that work was highlighting the importance of interpretation: the dominant view of information seeking at that time was that if people could find information then that was job done. But we found that an important role for clinicians is in helping lay people to interpret clinical information in terms of what it means for that individual – hence our focus on interpretation.

In later studies of lawyers' information work, Simon Attfield and I realised that there were two important elements missing from the information journey as we'd formulated it: information validation and information use. When we looked back at the health data, we didn't see a lot of evidence of validation (it might have been there, but it was largely implicit, and rolled up with interpretation) but – now sensitised to it – we found lots of evidence of information use. Doh! Of course people use the information – e.g. in subsequent health management – but we simply hadn't noticed it because people didn't talk explicitly about it as "using" the information. Time to extend the model.

Wind forward to today, and I'm writing a chapter for InteractionDesign.org on semi-structured qualitative studies. Don't hold your breath on this appearing: it's taking longer than I'd expected.

I've (partly) structured it according to the PRETAR framework for planning and conducting studies:
  • what's the Purpose of the study?
  • what Resources are available?
  • what Ethical considerations need to be taken into account?
  • what Techniques to use for data gathering?
  • how to Analyse data?
  • how to Report results?
...and, having been working with that framework for several years now, I have just realised that there's an important element missing, somewhere between resources and techniques for data gathering. What's missing is the step of taking the resources (which define what is possible) and using them to shape the detailed design of the study – e.g., in terms of interventions.

I've tended to lump the details of participant recruitment in with Resources (even though it's really part of the detailed study design), and of informed consent in with Ethics. But what about interventions such as giving people specific tasks to do for a think-aloud study? Or giving people a new device to use? Or planning the details of a semi-structured interview script? Just because a resource is available, that doesn't mean it's automatically going to be used in the study, and all those decisions – which of course get made in designing a study – precede data gathering. I don't think this means a total re-write of the chapter, but a certain amount of cutting and pasting is about to happen ...

Tuesday 13 August 2013

Wizard of Oz: the medium and the message

Last week, one of my colleagues asserted that it didn't matter how a message was communicated – that the medium and the message were independent. I raised a quizzical eyebrow. A few days previously, I'd been in Vancouver, and had visited the Museum of Anthropology. It's a delightful place: some amazing art and artefacts from many different cultures. Most of them relate to ceremony and celebration, rather than everyday life, but they give a flavour of people's cultures, beliefs and practices. And most of them are beautiful.

One object that caught my attention was a yakantakw, or "speaking through post". According to the accompanying description: "A carved figure such as this one, with its prominent, open mouth, was used during winter ceremonies. A person who held the privilege of speaking on behalf of the hosts would conceal himself behind the figure, projecting his voice forward. It was as though the ancestor himself was calling to the assembled guests." This particular speaking through post dates from 1860, predating the Wizard of Oz by about 40 years.

In HCI, we talk about "Wizard of Oz experiments" in which participants are intended to believe that they are interacting with a computer system when in fact they are interacting with a human being who is hiding behind that system. It matters that people think that they are interacting with a computer rather than another human being. The analogy with the Wizard of Oz is quite obvious. But it looks like the native people of that region beat L. Frank Baum to the idea, and we should really be calling them "Yakantakw experiments". Just as soon as we Western people learn to pronounce that word.

Thursday 18 July 2013

When reasoning and action don't match: Intentionality and safety

My team have been discussing the nature of “resilient” behaviour, the basic idea being that people develop strategies for anticipating and avoiding possible errors, and creating conditions that enable them to recover seamlessly from disturbances. One of the examples that is used repeatedly is leaving one’s umbrella by the door as a reminder to take it when going out in case of rain. Of course, getting wet doesn’t seriously compromise safety for most people, but let’s let that pass: it’s unpleasant. This presupposes that people are able to recognise vulnerabilities and identify appropriate strategies to address them. Two recent incidents have made me rethink some of the presuppositions.

On Tuesday, I met up with a friend. She had left her wallet at work. It had been such a hot day that she had taken it out of her back pocket and put it somewhere safe (which was, of course, well hidden). She recognised that she was likely to forget it, and thought of ways to remind herself: leaving a note with her car keys, for instance. But she didn’t act on this intention. So she had done the learning and reflection, but it still didn’t work for her because she didn’t follow through with action.

My partner occasionally forgets to lock the retractable roof on our car. I have never made this mistake, but wasn’t sure why until I compared his behaviour with mine. It turns out he is more relaxed than I am, and waits while the roof closes before taking the next step, which is often to close the windows, take the keys out of the lock and get out of the car. I, in contrast, am impatient. I want to lock the roof the moment it closes, so as the roof is coming over, my arm is going up ready to lock it. So I never forget (famous last words!): the action is automatised. The important point in relation to resilience is that I didn’t develop this behaviour in order to keep the car safe or secure: I developed it because I assumed that the roof needed to be secured and I wanted that to happen as quickly as possible. So it is not intentional, in terms of safety, and yet it has the effect of making the system safer.

So what keeps the system safe(r) is not necessarily what people learn or reflect on, but what they act on. This is, of course, only one aspect of the problem; when major disturbances happen, it’s almost certainly more important to consider people’s competencies and knowledge (and how they acquired them). To (approximately) quote a London Underground controller: “We’re paid for what we know, not what we do”. Ultimately, it's what people do that matters in terms of safety; sometimes that can be clearly traced to what they know and sometimes it can't.


Saturday 13 July 2013

Parallel information universes

A few years ago, a raised white spot developed on my nose. It's not pretty, so I'm not going to post a picture of it. I didn't worry about it for a while; then I tried some internet searching to work out what it was and whether I should do anything about it.

A search for "raised white spot on skin" suggested that "sebrrheic keratosis" was the most likely explanation. But I did an image search on that term and it was clearly wrong: wrong colour, wrong texture, wrong size...

"One should visit a doctor immediately when this signs arise": ignoring the grammatical problem in that advice, I booked an appointment with my doctor. She assured me that there is nothing to worry about -- that it is an "intradermal naevus", that there would be information about it on dermnetnz.org. Well, actually, no: information on Becker naevus (occurs mostly in men, has a dark pigment); on Sebaceous naevus (bright pink, like birth marks), Blue naevus (clue is in the colour)... and many other conditions that are all much more spectacular in appearance than a raised white spot. I find pages of information including words ending in "oma": melanoma, medulloblastoma, meningioma, carcinoma, lymphoma, fibroma. If the condition is serious, there is information out there about it. But the inconsequential? Not a lot, apparently. Contrary to my earlier belief, knowing the technical terms doesn't always unlock the desired information.

Look further. I find information on a patient site. But it's for healthcare professionals:  "This is a form of melanocytic naevus [...] The melanocytes do not impart their pigmentation to the lesion because they are located deep within the dermis, rather than at the dermo-epidermal junction (as is the case for junctional naevi/compound naevi)." I feel stupid: I have a PhD, but it's not in medicine or dermatology, and I have little idea what this means.

I eventually work out that naevus (plural naevi) is another term for mole. I try searching for "white mole" and find general forums (as well as pictures of small furry creatures who dig). The forums describe something that sounds about right, but they lack clinical information on causes, treatment or likely developments without treatment.

At that point, I give up. Lay people and clinicians apparently live in parallel universes when it comes to health information. All the challenges of interdisciplinary working that plague research projects also plague other interactions – at least when it comes to understanding white moles that are not cancerous and don't eat worms for breakfast.

Saturday 22 June 2013

Time management tools that work (or not)

Today, I missed a lunch with friends. Oops! What happened?

My computer died (beyond repair) a couple of months ago, so I got a new one. Rather than trying to reconstruct my previous way of working, I chose to start again "from scratch", though of course that built a lot on previous practices. One of the changes I introduced was that I separated my work and leisure diaries: work is now recorded in the University Standard Diary (aka Outlook) so that managers and administrators can access my diary as needed; leisure is recorded in Google Calendar (which is what I used to use for everything).

But in practice, there's only one of me, and I only live one life. And most of my 'appointments' are work-related. So I forgot to keep looking in the leisure diary. Hence overlooking today's lunch with friends, which had been in the diary for at least six months. Because it had been in the diary for so long it wasn't "in my head". Doh!

When I was younger, life seemed simpler: if it was Monday to Friday, 9 to 5 (approximately), then it was work time; otherwise it was leisure time. Except holidays. Keep two diaries, one for work and one for leisure. Easy. But the boundaries between work and leisure have blurred. Personal technologies travel to work; work technologies come home; work-time and home-time have poorly defined boundaries. It's hard to keep the plans and schedules separate. But I, like most people, don't particularly want work colleagues to know the minutiae of my personal life. Yes, the work diary allows one to mark entries as "private", but:
1) that suggests that it's a "private" work event, and
2) an entry in a "work" diary is not accessible to my family, although I'd like them to be able to refer to my home diary.

The ontology of my diary is messed up: I want work colleagues to be able to access my work diary and family to be able to access my leisure diary, but actually at the heart of things I want to be able to manage my life, which isn't neatly separated into work and leisure.

Saturday 18 May 2013

When is a medical error a crime?

I've recently had Collateral Damage recommended to me. I'm afraid I can't face reading it: just the summary is enough. Having visited Johns Hopkins, and in particular the Armstrong Institute for Patient Safety, a couple of months ago, I'm pretty confident that the terrible experience of the Walter family isn't universal, even within that one hospital, never mind nationally or internationally. And therein lies a big challenge: that there is such a wide spectrum of experiences and practices in healthcare that it's very difficult to generalise.

There are clearly challenges:
  • the demands of doing science and of providing the best quality patient care may pull in opposing directions: if we never try new things, relying on what is already known as best practice, we may not make discoveries that actually transform care.
  • if clinicians are not involved in the design of future medical technologies then how can those technologies be well designed to support clinical practice? But if clinicians are involved in their design, and have a stake in their commercial success, how can they remain objective in their assessments of clinical effectiveness?
There are no easy answers to such challenges, but clearly they are cultural and societal challenges as well as being challenges for the individual clinician. They are about what a society values and what behaviours are acceptable and/or rewarded, whether through professional recognition or financially.

I know that I have a tendency to view things positively, to argue for a learning culture rather than a blame culture. Accounts like "Collateral Damage" might force one to question that position as being naive in the extreme. For me, though, the question is: what can society and the medical establishment learn from such an account? That's not an easy question to answer. Progress in changing healthcare culture is almost imperceptibly slow: reports such as "To Err is Human" and "An Organisation with a Memory", both published over a decade ago (and the UK report now officially 'archived'), haven't had much perceptible effect. Consider, for example, the recent inquiry into failings in Mid Staffordshire.

Bob Wachter poses the question "when is a medical error a crime?". He focuses on the idea of a 'just culture': that there is a spectrum of behaviours, from the kinds of errors that anyone could make (and for which learning is a much more constructive response than blaming), through 'at risk' behaviours to 'reckless' behaviours where major risks are knowingly ignored.

The Just Culture Community notes that "an organisation's mission defines its reason for being". From a patient's perspective, a hospital's "reason for being" is to provide the best possible healthcare when needed. Problems arise when the hospital's mission is "to generate a profit", to "advance science", or any other mission that might be at odds with providing the best possible care in the short term. The same applies to individual clinicians and clinical teams within the hospital.

I find the idea of a "just culture" compelling. It is not a simple agenda, because it involves balancing learning with blame, giving a sophisticated notion of accountability. It clearly places the onus for ensuring safety at an organisational / cultural level, within which the individual works, interacts and is accountable. But it does presuppose that the different people or groups broadly agree on the mission or values of healthcare. 'Collateral Damage' forces one to question whether that assumption is correct. It is surely a call for reflection and learning: what should the mission of any healthcare provider be? How is that mission agreed on by both providers and consumers? How are values propagated across stakeholders? Etc. Assuming that patient safety is indeed valued, we all need to learn from cases such as this.

Coping with complexity in home hemodialysis

We've just had a paper published on how people who need to do hemodialysis at home manage the activity. Well done to Atish, the lead author.

People doing home hemodialysis are a small proportion of the people who need hemodialysis overall: the majority have to travel to a specialist unit for their care. Those doing home care have to take responsibility for a complex care regime. In this paper, we focus on how people use time as a resource to help with managing care. Strategies include planning to perform actions at particular times (so that time acts as a cue to perform an action); allowing extra time to deal with any problems that might arise; building time for reflection into a plan (to minimise the risks of forgetting steps); and organising tasks to minimise the number of things that need to be thought about or done at any one time (minimising peak complexity). There is a tendency to think about complex activities in terms of task sequences, and to ignore the details of the time frame in which people carry out tasks, and how time (and our experience of time) can be used as a resource as well as, conversely, placing demands on us (e.g. through deadlines).

This study focused on a particular (complex and safety-critical) activity that has to be performed repeatedly (every day or two) by people who may not be clinicians but who become experts in the task. We all do frequent tasks, whether that's preparing a meal or getting ready to go to work, that involve time management. There's great value in regarding time as a resource, to be used effectively, as well as recognising the demands it places on us (not enough time...).

Sunday 12 May 2013

Engineering for HCI: Upfront effort, downstream pay-back

[Photo: the end of Engineering Practice 1 (c. 1980).]

Once upon a time, I was a graduate trainee at an engineering company. The training was organised as three-month blocks in different areas of the company. My first three months were on the (work)shop floor. Spending hours working milling machines and lathes was a bit of a shock after studying mathematics at Cambridge. You mean it is possible to use your body as well as your mind to solve problems?!?

I learned that engineering was about the art of the possible (e.g. at that time you couldn't drill holes that went around corners, though 3D printing has now changed our view of what is possible). And also about managing precision: manufacturing parts that were precise enough for purpose. Engineering was inherently physical: about solving problems by designing and delivering physical artefacts that were robust and reliable and fit for purpose. The antithesis of the "trust me, I'm an engineer" view (however much that makes me smile).

Enter "software engineering": arguably, this term was coined to give legitimacy to a certain kind of computer programming. Programming was (and often still is) something of a cottage industry: people building one-off systems that seem to work, but no-one is quite sure of how, or when they might break down. Engineering is intended to reduce the variability and improve the reliability of software systems. And deliver systems that are fit for purpose.

So what does it mean to "engineer" an interactive computer system? At the most recent IFIP Working Group 2.7/13.4 meeting, we developed a video: 'Engineering for HCI: Upfront effort, downstream pay-back'. And it was accepted for inclusion in the CHI2013 Video Showcase. Success! Preparing this short video turned out to be even more difficult than I had anticipated. There really didn't seem to be much consensus on what it means to "engineer" an interactive computer system. There is general agreement that it involves some rigour and systematicity, some use of theory and science to deliver reproducible results, but does the resulting system have to be usable, to be fit for purpose? And how would one measure that? Not really clear.

I started by saying that I once worked for an engineering company. That term is probably fairly unambiguous. But I've never heard of an "interactive systems engineering company" or an "HCI engineering company". I wonder what one of those would look like or deliver.

Saturday 27 April 2013

When I get older: the uncountable positives


Last week, I was at a presentation by John Clarkson. It was a great talk: interesting, informative, thought provoking… Part-way through it, to make a point about the need for accessible technology, he presented a set of graphs showing how human capabilities decline with age. Basically, vision, hearing, strength, dexterity, etc. peak, on average, in the 20s, and it’s downhill all the way from there. It is possible that only two measurable values increase with age: age itself and grumpiness!

So this raises the obvious question: if we peak on every important variable when we’re in our 20s, why on earth aren’t most senior roles (Chief Executive, President, etc.) held by people in their 20s? Is this because grumpiness is in fact the most important quality, or is it because older people have other qualities that make them better suited to these roles? Most people would agree that it’s the latter.

The requisite qualities are often lumped under the term “wisdom”. I’m not an expert on wisdom, but I imagine there’s a literature defining and decomposing this concept to better understand it. One thing’s for sure though: it can’t be quantified in the way that visual or auditory acuity, strength, etc. can. The things that matter most for senior roles are not easily quantified.

We run a risk, in all walks of life, of thinking that if it can’t be measured then it has no value. In research we see it repeatedly in the view that the “gold standard” for research is controlled (quantifiable) experiments, and that qualitative research is “just stories”. In healthcare, this thinking manifests itself in many ways: in measures of clinical effectiveness and other outcome measures. In HCI, it manifests itself in the weight put on efficiency: of course, efficiency has its place (and we probably all have many examples of inefficient, frustrating interfaces), but there are many cases where the less easily measured outcomes (the quality of a search, the engagement of a game) are much more important.

As vision, hearing, memory, etc. decline, I'm celebrating wisdom and valuing the unmeasurable. Even if it can sound like "just stories".

Friday 26 April 2013

Who's the boss? Time for a software update...

Last summer, I gave a lift to a couple of friends to a place I was unfamiliar with. So I used a SatNav to help with the navigation. It was, of course, completely socially unaware. It interrupted our conversation repeatedly, without any consideration for when it is and is not appropriate to interrupt. No waiting for pauses in the conversation. No sensitivity to the importance of the message it was imparting. No apology. Standard SatNav behaviour. And indeed it’s not obvious how one would design it any other way. We turned off the sound and relied solely on the visual guidance after a while.

More recently, a colleague started up his computer near the end of a meeting, and it went into a cycle of displays: don’t turn me off; downloading one of thirty-three. I took a record of the beginning of this interaction, but gave up and left way before the downloading had finished. It might have been fine to pull the plug on the downloading (who knows?) but it wasn’t going to be a graceful exit. The technology seemed to be saying: “You’ve got to wait for me. I am in control here.” Presumably, the design was acceptable for a desktop machine that could just be left to complete the task, but it wasn’t for a portable computer that had to be closed up to be taken from the meeting room.

I have many more examples, and I am sure that every reader does too, of situations where the design of technology is inappropriate because the technology is unaware of the social context in which it is placed, and the development team have been unwilling or unable to make the technology better fit that context.

Saturday 23 March 2013

"How to avoid mistakes in surgery": a summary and commentary

I've just returned from the US, and my one "must see" catch-up TV programme was "How to avoid mistakes in surgery" (now available on YouTube). It's great to see human error in healthcare getting such prominent billing, and being dealt with in such an informative way. This is a very quick synopsis (of the parts I particularly noted).

The programme uses the case of Elaine Bromiley as the starting point and motivation for being concerned about human error in healthcare. The narrator, Kevin Fong, draws on experience from other domains including aviation, firefighting and Formula One pit-stops to propose ways to make surgery and anaesthesia safer. Themes that emerge include:
  • the importance of training, and the value of simulation suites (simlabs) for setting up challenging scenarios for practice. This is consistent with the literature on naturalistic decision making, though the programme focuses particularly on the importance of situational awareness (seeing the bigger picture).
  • the value of checklists for ensuring that basic safety checks have been completed. This is based on the work of Atul Gawande, and is gaining recognition in UK hospitals. It is claimed that checklists help to change power relationships, particularly in the operating theatre. I don't know whether there is evidence to support this claim, but it is intuitively appealing. Certainly, it is important in operating theatres, just as it has been recognised as being important in aviation.
  • the criticality of handovers from the operating theatre to the intensive care unit. This is where the learning from Formula One pit-stops comes in. It's about having a system and clear roles and someone who's in charge. For me, the way that much of the essential technology gets piled on the bed around the patient raised a particular question: isn't there a better way to do this?
  • dealing with extreme situations that are outside anything that has been trained for or anticipated. The example that was used for this was the Hudson River plane incident; ironically, on Thursday afternoon, about the time this programme was first broadcast, Pete Doyle and I were discussing this incident as an example that isn't really that extreme, because the pilot had been explicitly trained in all the elements of the situation, though not in the particular combination of them that occurred that day. There is a spectrum of resilient behaviour, and this is an example of well executed behaviour, but it's not clear to me that it is really "extreme". The programme refers to the need to build a robust, resilient safety system. Who can disagree with this? It advocates an approach of "standardise until you have to improvise". This is true, but it could miss an important element: standardisation, done badly, reduces the professional expertise and skill of the individual, and it is essential to enhance that expertise if the individual is to be able to improvise effectively. I suspect that clinicians resist checklists precisely because they seem to reduce their professional expertise, when in fact they should be liberating them to develop their expertise at the "edges", to deal better with extreme situations. But of course that demands that clinical professional development includes opportunities and challenges to develop that expertise. That is a challenge!
The programme finishes with a call to learn from mistakes, to have a positive attitude to errors. Captain Chesley 'Sully' Sullenberger talks about "lessons bought with blood", and about the "moral failure of forgetting these mistakes and having to re-learn them". On the basis of our research to date, and of discussions with others in the US and Canada studying incident reporting and learning from mistakes, this remains a challenge for healthcare.

Monday 4 March 2013

Ethics and informed consent: is "informed" always best?

I am in the US, visiting some of the leading research groups studying human factors, patient safety and interactive technologies. This feels like "coming home": not in the sense that I feel more at home in the US than the UK (I don't), but in that these groups care about the same things that we do – namely, the design, deployment and use of interactive medical devices. Talking about this feels like a constant uphill struggle in the UK, where mundane devices such as infusion pumps are effectively "invisible".

One of the issues that has exercised me today is the question of whether it is always ethical to obtain informed consent from the patients who are receiving drugs via infusion devices. The group I'm working with here in Boston have IRB (Institutional Review Board, aka Ethics Board) clearance to obtain informed consent from just the lead nurse on the ward where they are studying the use of devices. Not even from all the nurses, never mind the patients. In one of our studies, we were only allowed to observe a nurse programming a device in the middle of the night if we had obtained permission to observe from the patient before they had fallen asleep (which could have been several hours earlier). Even though we were not gathering any patient data or disturbing the patient in any way. In fact, we were probably disturbing the patient more by obtaining informed consent from them than we would have been by just observing the programming of the pump without their explicit knowledge.

We recently discussed the design of a planned study of possible errors with infusion devices with patient representatives. Feedback we got from one of them was: "patients and relatives need to have complete confidence in the staff and equipment, almost blind faith in many instances." There are times when ensuring that patients are fully informed is less important than giving them reassurance. The same is true for all of us when we have no control over the situation.

On the flight on the way here, there was an area of turbulence where we all had to fasten our seatbelts. That's fine. What was less fine was the announcement from the pilot that we shouldn't be unduly worried about this (the implication being that we should be a little bit worried): as a passenger in seat 27F, what use was it for me to worry? No idea! It made the flight less comfortable for me, to no obvious benefit (to me or anyone else).

Similarly with patients: if we accept that studying the use of medical devices has potential long-term benefits, we also need to review how we engage patients in the study. Does obtaining informed consent benefit them, or do the opposite? Maybe there are times where the principle of "blind faith" should dominate.

Friday 15 February 2013

The information journey and information ecosystems

Last year, I wrote a short piece for "Designing the search experience". But I didn't write it short enough (!) so it got edited down to a much more focused piece on serendipity. Which I won't reproduce here for copyright reasons (no, I don't get any royalties!). The theme that got cut was on information ecosystems: the recognition that people are encountering and working with information resources across multiple modalities the whole time. And that well designed information resources exploit that, rather than being stand-alone material. OK, so this blog is just digital, but it draws on and refers out to other information resources when relevant!

Here is the text from the cutting room floor...

The information journey presents an abstract view of information interaction from an individual’s perspective. We first developed this framework during work studying patients’ information seeking; the most important point that emerged from that study was the need for validation and interpretation. Finding information is not enough: people also need to be able to assess the reliability of the information (validation) and relate it to their personal situation and needs (interpretation).

This need for validation and interpretation had not been central to earlier information seeking models—possibly because earlier studies had not worked with user groups (such as patients) with limited domain knowledge, nor focused on the context surrounding information seeking. But we discerned these validation and interpretation steps in all of our studies: patients, journalists, lawyers and researchers alike.

The information journey starts when an individual either identifies a need (a gap in knowledge) or encounters information that addresses a latent need or interest. Once a need has been identified, a way to address that need must be determined and acted upon, such as asking the person at the next desk, going to a library, looking “in the world,” or accessing internet resources. On the web, that typically means searching, browsing, and following trails of “information scent”. Often finding information involves several different resources and activities. These varied sources create an information ecosystem of digital, physical and social resources.

Information encountered during this journey needs to be validated and interpreted. Validation is often a loose assessment of the credibility of the information. Sillence and colleagues highlight important stages in the process: an early and rapid assessment—based on criteria such as the website’s design and whether it appears to be an advertising site—is typically followed by a more deliberate analysis of the information content, such as assessing whether it is consistent with other sources of information.
 
Interpretation is not usually straightforward. It often involves support from information intermediaries (an important part of the information ecosystem). This is one of the important roles of domain specialists (e.g. doctors and lawyers): working with lay people to interpret the “facts” in the context of the actual, situated needs. Even without help from intermediaries, Sillence & co. describe the lay users of health information in their study as acting like scientists, generating and testing hypotheses as they encountered new information resources, both online and offline. No one information resource is sufficient: online information fits in a broader ecology of information sources which are used together, albeit informally, to establish confidence and build understanding.
 
The interpretation of information can often highlight further gaps in understanding. So one information need often leads to others. For example, a colleague of mine was recently planning to buy a Bluetooth headset. His initial assumption was that there were only a few suitable headsets on the market, and his aim was simply to identify the cheapest; but it quickly became apparent that there were hundreds of possible headsets, and that he first needed to understand more about their technical specifications and performance characteristics to choose one that suited his needs. A simple information problem had turned into a complex, multi-faceted one. A known item search had turned into an exploratory search, and the activity had turned from fact-finding to sensemaking.

Information resources surround us. We are informavores, consuming and interpreting information across a range of channels. We are participants in huge information ecosystems, and new information interaction technologies need to be designed not just to work well on their own, but to be valuable components of those ecosystems.

Thursday 14 February 2013

The importance of context (even for recognising family!)

I've been using the face recognition feature in my photograph management software. It was coming up with some suggestions that were pretty impressive (e.g. finding several additional photos that featured my mother, when primed with a few) and some that felt a little spooky (e.g. suggesting that a photo of me was actually of my mother – something that probably none of us wants to admit to, however attractive the parent). But it was also making some inexplicably bizarre suggestions – e.g. that a male colleague might be one of my daughters, or that a wine glass might be a face at all. This recognition technology is getting very sophisticated, but it clearly does not recognise faces in a human-like way!


From a computational perspective, it does not account for context: it identifies and matches features that, in some low-level way, correspond to "face", and it gets that right a lot of the time, identifying real human faces and artificial faces (such as a doll). However, it does not have the background knowledge to do the gender- and age-based reasoning that people naturally do. This makes some of its suggestions seem bizarre. And the fact that it works with low-level features of an image is really exposed when it suggests that a wine glass should be named.

From a human perspective, context also matters in recognition. For most adults who were close friends or relations, recognition from the face alone was generally straightforward, but for children or less familiar people, it was almost impossible to recognise them out of context. The particular software I was using did not allow me to switch easily between detail and context, so there are some faces that are, and will remain, unlabelled, meaning that I won't be able to find them again easily. For example, with context, it was instantly apparent who this small child was: she was sat on her (recognisable) mother's knee, with her big sister at her side. But without that context, she is a small (and slightly uncomfortable-looking) blonde toddler. Context matters.
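To make the contrast concrete, here is a minimal, entirely hypothetical sketch of context-free matching (not the actual algorithm of any particular photo software): every suggestion comes down to feature similarity between embeddings, so anything whose features happen to land near a labelled face can be "named", and there is nowhere for age, gender or scene knowledge to enter the decision.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 128-dimensional embeddings standing in for whatever low-level
# features the software extracts; real values would come from a face model.
rng = np.random.default_rng(0)
labelled = {
    "mother": rng.normal(size=128),
    "daughter": rng.normal(size=128),
    "colleague": rng.normal(size=128),
}

def suggest_name(region_embedding, labelled_faces):
    """Pick the closest labelled face for an image region.

    The decision uses nothing but feature similarity: there is no notion of
    age, gender, scene context, or even whether the region is a human face,
    which is why a shiny wine glass can still end up being "named".
    """
    return max(labelled_faces,
               key=lambda name: cosine_similarity(region_embedding, labelled_faces[name]))

wine_glass = rng.normal(size=128)  # not a face at all
print(suggest_name(wine_glass, labelled))  # still confidently returns somebody's name
```

Human-like recognition would bring in exactly the context described above: who else is in the photo, roughly how old the person looks, where and when the picture was taken.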

Thursday 17 January 2013

When context really matters: entertainment, safety ... or neither?

Yesterday, Mark Handley drew my attention to a video of the recent evacuation of an All Nippon Airways Boeing 787 due to a battery problem: "Here's a video from inside the plane:  The inflight entertainment system has clearly just rebooted, and about half the screens are displaying the message "Please Wait" in large comforting letters. Maybe not the most appropriate message when you want people to evacuate quickly!"

Fortunately, it seems that passengers ignored the message asking them to wait, and did indeed evacuate instead. But did they do so as quickly as they might have done otherwise? We'll never know. They will have had many other sources of information available at the time, of which the most powerful were probably other people's behaviour and the appearance of the evacuation slides. The digital and physical contexts were providing different cues to action.

Brad Karp observed: "Presumably when you activate the slides, you either want to kill the entertainment system or have it display 'EVACUATE!'"

Alan Cooper, in "The inmates are running the asylum", discusses many examples of interaction design. One he explores is the challenge of designing good in-flight entertainment systems. For example, he points out that the computer scientist's tendency to deal with only three numbers (0, 1, infinity) is inappropriate when choosing a maximum number of films to make available on a flight, and that choosing a reasonable (finite) number makes possible attractive interaction options that don't scale well to infinity. He also argues that the entertainment system needs two different interfaces: one for the passenger and a different one for the crew who need to manage it. But if you watch the video, you will see that half the screens on the plane are showing a reboot sequence. Who designed this as an interface for passengers? If the system developers don't even think to replace a basic reboot sequence by something more engaging or informative, what chance of them thinking about the bigger picture of how the entertainment system might be situated within, and interact with, the broader avionics system?

In-flight entertainment systems don't seem to be considered as part of the safety system of the aircraft. Surely, they should be. But that requires a broader "systems" perspective when designing, to give passengers more contextually relevant information that situates the digital more appropriately within the physical context.

Happy (entertaining, safe) flying!