Friday, 25 May 2012

Designing for "me"

The best designers seem to design for themselves. I just love my latest Rab jacket. I know Rab's not a woman, but he's a climber and he understands what climbers need. Most climbing equipment has been designed by climbers; in fact, I can't imagine how you would design good climbing gear without really understanding what climbers do and what they need. Designers need a dual skill set: to be great designers, and to really understand the context for which they are designing.

Shift your attention to interaction design. Bill Moggridge is recognised as a great designer, and he argues powerfully for the importance of intuition and design skill in designing good products. BUT he draws on examples where people could be designing for themselves. Designers who are also game-players can invoke intuition to design good games, for example. But judging by the design of most washing machine controls, few designers of these systems actually do the laundry! There seems to be a huge gulf between contexts where the designer is also a user, or has an intimate knowledge of the context of use, and contexts where the designer is an outsider.

It's all too easy to make assumptions about other people's work and about the nuances of their activities, and the resulting over-simplifications lead to inappropriate design decisions. Techniques such as Contextual Inquiry are intended to help the design team understand the context of use in depth. But it's not always possible for the entire design team to immerse themselves in the context of use; then you need surrogates, such as rich descriptions that help the design team to imagine being there. Dourish presents a compelling argument against ethnographers having to present implications for design: he argues that it should be enough to provide a rich description of the context of use. His argument is much more sophisticated than the one I'm presenting here, which is simply that it's impossible to reliably design for a situation you don't understand deeply. And for that, you need ways for people to become "dual experts" – in design, and in the situations for which they are designing.

Saturday, 19 May 2012

When is a user like a lemon?

Discussing the design lifecycle with one of my PhD students, I found myself referring back to Don Norman's book on emotional design – in particular, to the cover picture of a Philippe Starck lemon squeezer. The evaluation criteria for a lemon squeezer are, I would guess, that it can be used to squeeze lemons (for which it probably needs to be tested with some lemons), that it can be washed, that it will not corrode or break quickly, and that (in this case, at least) it looks beautiful.

These evaluation criteria can be addressed relatively rapidly during the design lifecycle. You don't need to suspend the design process for a significant length of time to go and find a representative sample of lemons on which to test a prototype squeezer. You don't need to plan a complex lemon-squeezing study with a carefully devised set of lemon-squeezing tasks. There's just one main task for the squeezer to perform, and the variability in lemons is mercifully low.

In contrast, most interactive computer systems support a plethora of tasks, and are intended for use by a wide variety of people, so requirements gathering and user testing have to be planned as separate activities in the design of interactive systems. Yet even in the 21st century, this doesn't seem to be fully recognised. As we found in a study a few years ago, agile software development processes don't typically build in time for substantive user engagement (other than by involving a few user representatives in the development team). And when you come to the standards and regulations for medical devices, they barely differentiate between latex gloves and glucometers or interactive devices in intensive care. Users of interactive systems are apparently regarded as being as uniform and controllable as lemons: define what they should do, and they will do it. In our dreams! (Or maybe our nightmares...)

Monday, 7 May 2012

Usable security and the total customer experience

Last week, I had a problem with my online Santander account. This isn't particularly about that company, but a reflection on a multi-channel interactive experience and the nature of evidence. When I phoned to sort out the problem, I was asked a series of security questions that were essentially "trivia" questions about the account that could only be answered accurately by being logged in at the time. I'd been expecting a different kind of security question (mother's maiden name and the like), so didn't have the required details to hand. Every question I couldn't answer made my security rating worse, and quite quickly I was being referred to the fraud department. Except that they would only ring me back within 6 hours, at their convenience, not mine. I never did receive that call because I couldn't stay in for that long. The account got blocked, so now I couldn't get the answers to the security trivia questions even though I knew that would be needed to establish my identity. Total impasse.

After a couple more chicken-and-egg phone calls, I gathered up all the evidence I could muster to prove my identity and went to a branch to resolve the problem face-to-face. I was assured all was fine, and that they had put a note on my account to confirm that I had established my credentials. But I got home and the account was still blocked. So yet another chicken-and-egg phone call, another failed trivia test. Someone would call me back about it. Again, they called when I was out. Their refusal to adapt to the customer's context and constraints was costing them time and money, just as it was costing me time and stress.

I have learned a lot from the experience: for example, to enter these conversations with every possible factoid of information at my fingertips, and to expect to be treated like a fraudster rather than a customer... The telephone interaction with a human being is not necessarily any more flexible than the interaction with an online system; the customer still has to conform to an interaction style determined by the organisation.

Of course, the nature of evidence is different in the digital world from the physical one, where (in this particular instance) credible photo ID is still regarded as the Gold Standard, but being able to answer account trivia seems like a pretty poor way of establishing identity. As discussed last week, evidence has to answer the question (in this case: is the caller the legitimate customer?). A trivia quiz is not usable by the average customer until they have learned to think like security people. This difference in thinking styles has been recognised for many years now (see for example "Users are not the enemy"); we talk about interactive system design being "user centred", but it is helpful if organisations can be user centred too, and this doesn't have to compromise security, if done well. I wonder how long it will take large companies to learn?

Tuesday, 1 May 2012

Seeing is believing?

In a recent interview, Mary Beard recounted a Roman joke: "A guy meets another in the street and says: 'I thought you were dead.' The bloke says: 'Can't you see I'm alive?' The first replies: 'But the person who told me you were dead is more reliable than you.'" She used the joke (apparently considered hilarious all those centuries ago) to illustrate a point about changing cultures and the nature of evidence. But the question of evidence is just as important in our work today. When are verbal reports a reliable form of evidence, and when do you need more direct forms of evidence? What can you learn from web analytics or the device log of an infusion pump? What does observing people tell you, as against interviewing them? Etc.

In general, device logs of any kind should tell you what happened, over a large number of instances, but they can't tell you anything much about the circumstances or the causes (what people thought they were doing, or what context they were in). So they give you an idea of where problems might lie, but not really what those problems are; they give quantity, but not necessarily quality.
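
To make that concrete, here's a minimal sketch (in Python, with an entirely invented log format – real pump logs and analytics feeds will differ) of the kind of question a log can answer: what happened, and how often.

```python
from collections import Counter

# Hypothetical log lines in an invented format, purely for illustration;
# real device logs will look different, but the principle is the same.
log_lines = [
    "2012-04-02 10:31 EVENT soft_limit_override",
    "2012-04-02 10:32 EVENT infusion_start",
    "2012-04-02 11:05 ALARM occlusion",
    "2012-04-03 09:14 EVENT soft_limit_override",
    "2012-04-03 09:15 EVENT infusion_start",
]

# Counting entries tells you *what* happened and *how often*...
counts = Counter(" ".join(line.split()[2:]) for line in log_lines)
for entry, n in counts.most_common():
    print(f"{entry}: {n}")

# ...but nothing in the log says *why* a limit was overridden, who was at
# the bedside, or what else was going on at the time. The quantity is here;
# the quality has to come from observation or interviews.
```

The point is not the code, of course, but the shape of what a log can and cannot answer.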

Conversely, interviews and observations can potentially give quality, but not quantity. They have greater explanatory power; interviews are good for finding out people's perceptions (e.g. of why they behave in certain ways), and observations will give insights into the contexts within which people do things and the circumstances surrounding actions. Interviews may overlook details that people consider unremarkable, while observations may catch those details but not explain them. And of course the questions that are asked or the way an observational study is conducted will determine what data is gathered.

As I type this, most of it seems very self-evident, and yet people often seem to choose inappropriate data gathering methods that don't reliably answer the questions posed. I'll use an example from a researcher I have great respect for, and who is undeniably a leader in the field: ever since I first read it, I have been perplexed by Jim Reason's analysis of photocopier errors – not because it is inconsistent with other studies, but because it is based entirely on retrospective self-reports. But our memories of past events are highly selective. I make errors every day, as we all do (see errordiary for both mundane and bizarre examples), but the ones I can recall later are the ones that were most embarrassing, most costly, most amusing or otherwise memorable. So what confidence can we have in retrospective reports as a way of measuring error? I don't know. And I don't think that's an admission of failure on my part; it's a recognition that retrospective self-report is an unreliable way of gathering data about human error. And that remains a challenge: to match research questions with data gathering and analysis methods appropriately.

Sunday, 22 April 2012

Making sense of health information

A couple of people have asked me why I'm interested in patients' sensemaking, and what the problem is with all the health information that's available on the web. Surely there's something for everyone there? Well maybe there is (though it doesn't seem that way), but both our studies of patients' information seeking and personal experience suggest that it's far from straightforward.

Part of the challenge is in getting the language right: finding the right words to describe a set of symptoms can be difficult, and if you get the wrong words then you'll get inappropriate information. And as others have noted, the information available on the internet tends to be biased towards more serious conditions, leading to a rash of cyberchondria. But actually, diagnosis is only a tiny part of the engagement with and use of health information. People have all sorts of questions, such as "should I be worried?" or "how can I change my lifestyle?", as well as much more individual and personal concerns, often focused not on a single question but on trying to understand an experience, or a situation, or how to manage a condition. For example, there may be general information on migraines available, but any individual needs to relate that generic information to their own experiences, and probably experiment with trigger factors and ways of managing their own migraine attacks, gradually building up a personal understanding over time, using both external resources and individual experiences.

The literature describes sensemaking in different ways that share many common features. Key elements are that people:
  • look for information to address recognised gaps in understanding (and there can be challenges in looking for information and in recognising relevant information when it is found).
  • store information (whether in their heads or externally) for both immediate and future reference.
  • integrate new information with their pre-existing understanding (so sensemaking never starts from a blank slate, and if pre-existing understanding is flawed then it may require a radical shift to correct that flawed understanding).
One important element that is often missing from the literature is the importance of interpretation of information: that people need to explicitly interpret information to relate to their own concerns. This is particularly true for subjects where there are professional and lay perspectives, languages and concerns for the same basic topic. Not only do professionals and lay people (clinicians and patients in this case) have different terminology; they also have different concerns, different engagement, different ways of thinking about the topic.

Sensemaking is about changing understanding, so it is highly individual. One challenge in designing any kind of resource that helps people make sense of health information is recognising the variety of audiences for information (prior knowledge, kinds of concerns, etc.) and making it easy for people to find information that is relevant to them, as an individual, right here and now. People will always need to invest effort in learning: I don't think there's any way around that (indeed, I hope there isn't!)... but patients' sensemaking seems particularly interesting because we're all patients sometimes, and because making sense of our health is important, but could surely be easier than it seems to be right now.

Sunday, 15 April 2012

The pushmepullyou of conceptual design

I've just been reading Jeff Johnson and Austin Henderson's new book on 'conceptual models'. They say (p.18) that "A conceptual model describes how designers want users to think about the application." At first this worried me: surely the designers should be starting by understanding how users think about their activity and how the application can best support users?

Reading on, it's obvious that putting the user at the centre is important, and they include some compelling examples of this. But the question of how to develop a good conceptual model that is grounded in users' expectations and experiences is not the focus of the text: the focus is on how to go from that model to an implementation. This approach is complementary to ours with CASSM, where we've been concerned with how to elicit and describe users' conceptual models, and then how to support them through design.

It seems to be impossible to simultaneously put both the user(s) and the technology at the centre of the discourse. In focusing on the users, CASSM is guilty of downplaying the challenges of implementation. Conversely, in focusing on implementation, Johnson and Henderson de-emphasise the challenges of eliciting users' conceptual models. The two can seem, like the pushmepullyou from Dr Dolittle, to be pulling in opposite directions. But this text is a welcome reminder that conceptual models still matter in design.

Thursday, 5 April 2012

KISS: Keep It Simple, Sam!

Tony Hoare is credited with claiming that... "There are two ways of constructing a software design; one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." Of course, he is focusing on software: on whether it is easy to read or test, or whether it is impossible to read (what used to be called "spaghetti code" but probably has some other name now), and impossible to devise a comprehensive set of tests for.

When systems suffer "feature creep", where they acquire more and more features to address real or imagined user needs, it's nigh on impossible to keep the code simple, so inevitably it becomes harder to test, and harder to be confident that the testing has been comprehensive. This is a universal truth, and it's certainly the case in the design of software for infusion devices. The addition of drug libraries and dose error reduction software, and the implementation of multi-function systems to be used across a range of settings for a variety of purposes, make it increasingly difficult to be sure that the software will perform as intended under all circumstances. There is then a trade-off between delivering a timely system and delivering a well designed and well tested system... or delivering a system that then needs repeated software upgrades as problems are unearthed. And you can never be sure you've really found all the possible problems.
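
A back-of-the-envelope sketch makes the point. The numbers below are invented purely for illustration (they don't describe any real device), but they show how added options multiply, rather than add to, the space that has to be tested.

```python
from itertools import product

# Invented figures, purely illustrative -- not taken from any real infusion device.
software_versions = ["3.7", "3.8"]
drug_libraries    = ["ICU", "paediatrics", "oncology", "general ward"]
care_settings     = ["ICU", "theatre", "ward", "ambulance"]
infusion_modes    = ["continuous", "bolus", "PCA", "intermittent"]

configurations = list(product(software_versions, drug_libraries,
                              care_settings, infusion_modes))
print(len(configurations))  # 2 * 4 * 4 * 4 = 128 distinct configurations

# Each new feature or variant multiplies the space to be tested, which is
# why "so simple that there are obviously no deficiencies" gets harder
# with every addition.
```

Even with these toy numbers, testing every combination against every plausible task quickly becomes impractical.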

These aren't just problems for the software: they're also problems for the users. When software upgrades change the way the system performs, it's difficult for the users to predict how it will behave. Nurses don't have the mental resources to be constantly thinking about whether they're working with the infusion device that's running version 3.7 of the software or the one that's been upgraded to version 3.8, or to anticipate the effects of the different software versions, or different drug libraries, on system performance. Systems that are already complicated enough are made even more so by such variability.

Having fought with several complicated technologies recently, my experience is not that they have no obvious deficiencies, but that those deficiencies are really, really hard to articulate clearly. And if you can't even describe a problem, it's going to be very hard to fix it. Better to avoid problems in the first place: KISS!