Friday 25 May 2012

Designing for "me"

The best designers seem to design for themselves. I just love my latest Rab jacket. I know Rab's not a woman, but he's a climber and he understands what climbers need. Most climbing equipment has been designed by climbers; in fact, I can't imagine how you would design good climbing gear without really understanding what climbers do and what they need. Designers need a dual skill set: to be great designers, and to really understand the context for which they are designing.

Shift your attention to interaction design. Bill Moggridge is recognised as a great designer, and he argues powerfully for the importance of intuition and design skill in designing good products. BUT he draws on examples where people could be designing for themselves. Designers who are also game-players can invoke intuition to design good games, for example. But judging by the design of most washing machine controls, few designers of these systems actually do the laundry! There seems to be a huge gulf between contexts where the designer is also a user, or has an intimate knowledge of the context of use, and contexts where the designer is an outsider.

It's all too easy to make assumptions about other people's work, and about the nuances of their activities. The result is over-simplifications that lead to inappropriate design decisions. Techniques such as Contextual Inquiry are intended to help the design team understand the context of use in depth. But it's not always possible for the entire design team to immerse themselves in the context of use. Then you need surrogates, such as rich descriptions that help the design team to imagine being there. Dourish presents a compelling argument against ethnographers having to present implications for design: he argues that it should be enough to provide a rich description of the context of use. His argument is much more sophisticated than the one I'm presenting here, which is simply that it's impossible to reliably design for a situation you don't understand deeply. And for that, you need ways for people to become "dual experts" – in design, and in the situations for which they are designing.

Saturday 19 May 2012

When is a user like a lemon?

Discussing the design lifecycle with one of my PhD students, I found myself referring back to Don Norman's book on emotional design – in particular, to the cover picture of a Philippe Starck lemon squeezer. The evaluation criteria for a lemon squeezer are, I would guess, that it can be used to squeeze lemons (for which it probably needs to be tested with some lemons), that it can be washed, that it will not corrode or break quickly, and that (in this case, at least) it looks beautiful.

These evaluation criteria can be addressed relatively rapidly during the design lifecycle. You don't need to suspend the design process for a significant length of time to go and find a representative sample of lemons on which to test a prototype squeezer. You don't need to plan a complex lemon-squeezing study with a carefully devised set of lemon-squeezing tasks. There's just one main task for the squeezer to perform, and the variability in lemons is mercifully low.

In contrast, most interactive computer systems support a plethora of tasks, and are intended for use by a wide variety of people, so requirements gathering and user testing have to be planned as separate activities in the design of interactive systems. Yet even in the 21st century, this doesn't seem to be fully recognised. As we found in a study a few years ago, agile software development processes don't typically build in time for substantive user engagement (other than by involving a few user representatives in the development team). And when you come to the standards and regulations for medical devices, they barely differentiate between latex gloves and glucometers or interactive devices in intensive care. Users of interactive systems are apparently regarded as being as uniform and controllable as lemons: define what they should do, and they will do it. In our dreams! (Or maybe our nightmares...)

Monday 7 May 2012

Usable security and the total customer experience

Last week, I had a problem with my online Santander account. This isn't particularly about that company, but a reflection on a multi-channel interactive experience and the nature of evidence. When I phoned to sort out the problem, I was asked a series of security questions that were essentially "trivia" questions about the account – questions that could only be answered accurately by someone logged in to the account at the time. I'd been expecting a different kind of security question (mother's maiden name and the like), so didn't have the required details to hand. Every question I couldn't answer made my security rating worse, and quite quickly I was being referred to the fraud department. Except that they would only ring me back within 6 hours, at their convenience, not mine. I never did receive that call because I couldn't stay in for that long. The account got blocked, so now I couldn't look up the answers to the security trivia questions, even though I knew those answers would be needed to establish my identity. Total impasse.

After a couple more chicken-and-egg phone calls, I gathered up all the evidence I could muster to prove my identity and went to a branch to resolve the problem face-to-face. I was assured all was fine, and that they had put a note on my account to confirm that I had established my credentials. But I got home and the account was still blocked. So yet another chicken-and-egg phone call, another failed trivia test. Someone would call me back about it. Again, they called when I was out. Their refusal to adapt to the customer's context and constraints was costing them time and money, just as it was costing me time and stress.

I have learned a lot from the experience: for example, enter these conversations with every possible factoid at your fingertips, and expect to be treated like a fraudster rather than a customer... The telephone interaction with a human being is not necessarily any more flexible than the interaction with an online system; the customer still has to conform to an interaction style determined by the organisation.

Of course, the nature of evidence is different in the digital world from the physical one, where (in this particular instance) credible photo ID is still regarded as the Gold Standard; being able to answer account trivia seems like a pretty poor way of establishing identity. As discussed last week, evidence has to answer the question (in this case: is the caller the legitimate customer?). A trivia quiz is not usable by the average customer until they have learned to think like security people. This difference in thinking styles has been recognised for many years now (see, for example, "Users are not the enemy"); we talk about interactive system design being "user centred", but it would help if organisations could be user centred too – and, done well, that doesn't have to compromise security. I wonder how long it will take large companies to learn?

Tuesday 1 May 2012

Seeing is believing?

In a recent interview, Mary Beard recounted a Roman joke: "A guy meets another in the street and says: 'I thought you were dead.' The bloke says: 'Can't you see I'm alive?' The first replies: 'But the person who told me you were dead is more reliable than you.'" She used the joke (apparently considered hilarious all those centuries ago) to illustrate a point about changing cultures and the nature of evidence. But the question of evidence is just as important in our work today. When are verbal reports a reliable form of evidence, and when do you need more direct forms of evidence? What can you learn from web analytics or the device log of an infusion pump? What does observing people tell you, as against interviewing them? Etc.

In general, device logs of any kind should tell you what happened, over a large number of instances, but they can't tell you anything much about the circumstances or the causes (what people thought they were doing, or what context they were in). So they give you an idea of where problems might lie, but not really what those problems are; they give quantity, but not necessarily quality.
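To make the "quantity, not quality" point concrete, here is a minimal sketch (in Python, with an invented log format, column name and file name – none of them taken from any real device) of the kind of analysis a device log supports: it can count how often each alert fired, and nothing more.

```python
# Hypothetical sketch: tally alert codes from an imagined infusion-pump event
# log (a CSV with columns: timestamp, alert_code). The counts show *what*
# happened and how often, but nothing about the circumstances or causes.
import csv
from collections import Counter

def tally_alerts(path):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["alert_code"]] += 1
    return counts

# e.g. tally_alerts("pump_log.csv") might give
# Counter({'OCCLUSION': 412, 'DOSE_LIMIT': 57, 'KEYPAD_TIMEOUT': 23})
# -- plenty of quantity, but no insight into why any of it happened.
```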

Conversely, interviews and observations can potentially give quality, but not quantity. They have greater explanatory power; interviews are good for finding out people's perceptions (e.g. of why they behave in certain ways), and observations will give insights into the contexts within which people do things and the circumstances surrounding actions. Interviews may overlook details that people consider unremarkable, while observations may catch those details but not explain them. And of course the questions that are asked or the way an observational study is conducted will determine what data is gathered.

As I type this, most of it seems very self-evident, and yet people often choose data gathering methods that don't reliably answer the questions posed. I'll use an example from a researcher I have great respect for, and who is undeniably a leader in the field: ever since I first read it, I have been perplexed by Jim Reason's analysis of photocopier errors – not because it is inconsistent with other studies, but because it is based entirely on retrospective self-reports, and our memories of past events are highly selective. I make errors every day, as we all do (see errordiary for both mundane and bizarre examples), but the ones I can recall later are the ones that were most embarrassing, most costly, most amusing or otherwise memorable. So what confidence can we have in retrospective reports as a way of measuring error? I don't know. And I don't think that's an admission of failure on my part; it's a recognition that retrospective self-report is an unreliable way of gathering data about human error. And that remains the challenge: to match research questions with appropriate data gathering and analysis methods.