Saturday, 9 June 2012

Give me a little more time...

A few weeks ago, one of our PhD students, Amir Kamsin, was awarded 3rd prize in the student research competition at CHI for his research on how we manage our time, and on tools to support time management. Congratulations to Amir! The fact that it has taken me until now to comment shows how difficult I find it to do things in a timely way. Many books and blogs (e.g. ProfSerious') have been written on how we should manage our time; it's difficult to even find the time to read them!

Some years ago, Thomas Green and I did a study of time management, and concluded that "what you get is not what you need". In that paper, we were focusing mainly on diary / calendar management and highlighted important limitations of online diaries, most of which are still true today (e.g. ways of marking meetings as provisional; including travelling time as well as meeting time; and making entries appropriately interpretable by others). In contrast, Amir is focusing on "to do" management. There are many aspects to his findings, of course. Two of them particularly resonate for me...

The first is how much of our time management is governed by emotional factors. It has long been a standing joke in my research group that you can tell when someone is avoiding doing a particular (usually big) job because they suddenly get ultra-productive on other tasks. The guilt about the big job is a great motivator! But I've become increasingly aware that there are even very small tasks that I avoid, either because I don't know where to start or because the first step is daunting. I've started to mentally label these as "little black clouds", and I'm gradually learning to prioritise them before they turn into big black clouds -- not necessarily by doing them immediately, but by committing to a time to do them. No "to-do" management system that I'm aware of makes emotional factors explicit. Even their implementations of "importance" and "urgency" don't capture the fluidity of these ideas in practice. There's much more to managing tasks and projects than importance and urgency.
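To illustrate what "making emotional factors explicit" might look like in a to-do tool, here is a small, purely hypothetical sketch in Python. The field names and the "black cloud" labels are my own invention, not features of any existing system; the point is simply that a task could carry an emotional weight and a committed time alongside the usual importance and urgency.

from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class EmotionalWeight(Enum):
    """How a task feels, not just how important or urgent it is."""
    NEUTRAL = "neutral"
    LITTLE_BLACK_CLOUD = "little black cloud"   # small but dread-inducing
    BIG_BLACK_CLOUD = "big black cloud"         # large and actively avoided

@dataclass
class Task:
    description: str
    importance: int                                   # 1 (low) to 5 (high)
    urgency: int                                      # 1 (low) to 5 (high)
    feeling: EmotionalWeight = EmotionalWeight.NEUTRAL
    committed_time: Optional[datetime] = None         # when I have promised myself to do it

def needs_commitment(task):
    """Surface tasks being avoided for emotional reasons that have no committed time yet."""
    return task.feeling is not EmotionalWeight.NEUTRAL and task.committed_time is None

# A small but daunting task gets flagged even though it is neither especially
# important nor urgent by conventional measures.
todo = [
    Task("Write referee report", importance=3, urgency=2,
         feeling=EmotionalWeight.LITTLE_BLACK_CLOUD),
    Task("Book meeting room", importance=2, urgency=4),
]
for task in todo:
    if needs_commitment(task):
        print("Commit to a time for:", task.description)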

The second is how much "to do" information is tied up in email. Not just simple "hit reply" to-dos, but also complex discussions and decisions about projects. There are tools that integrate email, calendars and address books, and there are to-do management systems with or without calendars. But I really want a project management tool that integrates completely seamlessly with both my email and my calendar. And is quick and easy to learn. And requires minimal extra effort to manage. Anyone know of one?

Friday, 1 June 2012

When is a qualitative study a Grounded Theory study?

I recently came across Beki Grinter's blog posts on Grounded Theory. These make great reading.

The term has been used a lot in HCI as a "bumper sticker" for any and every qualitative analysis regardless of whether or not it follows any of the GT recipes closely, and whether or not it results in theory-building. I exaggerate slightly, but not much. As Beki says, GT is about developing theory, not just about doing a "bottom up" qualitative analysis, possibly without even having any particular questions or aims in mind.

Sometimes, the questions do change, as you discover that your initial questions or assumptions about what you might find are wrong. This has happened to us more than once. For example, we conducted a study of London Underground control rooms where the initial aim was to understand the commonalities and contrasts across different control rooms, and what effects these differences had on the work of controllers, and the ways they used the various artefacts in the environment. In practice, we found that the commonalities were much more interesting than the contrasts, and that there were several themes that emerged across all the contexts we studied. The most intriguing was discovering how much the controllers seemed to be playing with a big train set! This links in to the literature on "serious games", a literature that we hadn't even considered when we started the study (so we had to learn about it fast!).

In our experience, there's an interdependent cycle between qualitative data gathering and analysis and pre-existing theory. You start with questions, gather and analyse some data, realise your questions weren't quite right, so modify them (usually to be more interesting!), gather more data, analyse it much more deeply, realise that Theory X almost accounts for your data, see what insights relating your data to Theory X provides, gather yet more data, analyse it further... end up with either some radically new theory or a minor adaptation of Theory X. Or (as in our study of digital libraries deployment) end up using Theory X (in this case, Communities of Practice) to make sense of the situations you've studied.

Many would say that a "clean" GT doesn't draw explicitly on any existing theories, but builds theory from data. In practice, in my experience, you get a richer analysis if you do draw on other theory, but that's not an essential part of GT. The important thing is to be reflective and critical: to use theory to test and shine light on your data, but not to succumb to confirmation bias, where you only notice the data that fits the theory and ignore the rest. Theory is always there to be overturned!

Friday, 25 May 2012

Designing for "me"

The best designers seem to design for themselves. I just love my latest Rab jacket. I know Rab's not a woman, but he's a climber and he understands what climbers need. Most climbing equipment has been designed by climbers; in fact, I can't imagine how you would design good climbing gear without really understanding what climbers do and what they need. Designers need a dual skill set: to be great designers, and to really understand the context for which they are designing.

Shift your attention to interaction design. Bill Moggridge is recognised as a great designer, and he argues powerfully for the importance of intuition and design skill in designing good products. BUT he draws on examples where people could be designing for themselves. Designers who are also game-players can invoke intuition to design good games, for example. But judging by the design of most washing machine controls, few designers of these systems actually do the laundry! There seems to be a huge gulf between contexts where the designer is also a user, or has an intimate knowledge of the context of use, and contexts where the designer is an outsider.

It's all too easy to make assumptions about other people's work, and about the nuances of their activities; the result is over-simplifications that lead to inappropriate design decisions. Techniques such as Contextual Inquiry are intended to help the design team understand the context of use in depth. But it's not always possible for the entire design team to immerse themselves in the context of use. Then you need surrogates, such as rich descriptions that help the design team to imagine being there. Dourish presents a compelling argument against ethnographers having to present implications for design: he argues that it should be enough to provide a rich description of the context of use. His argument is much more sophisticated than the one I'm presenting here, which is simply that it's impossible to reliably design for a situation you don't understand deeply. And for that, you need ways for people to become "dual experts" – in design, and in the situations for which they are designing.

Saturday, 19 May 2012

When is a user like a lemon?

Discussing the design lifecycle with one of my PhD students, I found myself referring back to Don Norman's book on emotional design – in particular, to the cover picture of a Philippe Starck lemon squeezer. The evaluation criteria for a lemon squeezer are, I would guess, that it can be used to squeeze lemons (for which it probably needs to be tested with some lemons), that it can be washed, that it will not corrode or break quickly, and that (in this case, at least) it looks beautiful.

These evaluation criteria can be addressed relatively rapidly during the design lifecycle. You don't need to suspend the design process for a significant length of time to go and find a representative sample of lemons on which to test a prototype squeezer. You don't need to plan a complex lemon-squeezing study with a carefully devised set of lemon-squeezing tasks. There's just one main task for the squeezer to perform, and the variability in lemons is mercifully low.

In contrast, most interactive computer systems support a plethora of tasks, and are intended for use by a wide variety of people, so requirements gathering and user testing have to be planned as separate activities in the design of interactive systems. Yet even in the 21st century, this doesn't seem to be fully recognised. As we found in a study a few years ago, agile software development processes don't typically build in time for substantive user engagement (other than by involving a few user representatives in the development team). And when you come to the standards and regulations for medical devices, they barely differentiate between latex gloves and glucometers or interactive devices in intensive care. Users of interactive systems are apparently regarded as being as uniform and controllable as lemons: define what they should do, and they will do it. In our dreams! (Or maybe our nightmares...)

Monday, 7 May 2012

Usable security and the total customer experience

Last week, I had a problem with my online Santander account. This isn't particularly about that company, but a reflection on a multi-channel interactive experience and the nature of evidence. When I phoned to sort out the problem, I was asked a series of security questions that were essentially "trivia" questions about the account, which could only be answered accurately by being logged in at the time. I'd been expecting a different kind of security question (mother's maiden name and the like), so didn't have the required details to hand. Every question I couldn't answer made my security rating worse, and quite quickly I was being referred to the fraud department. Except that they would only ring me back within 6 hours, at their convenience, not mine. I never did receive that call because I couldn't stay in for that long. The account got blocked, so now I couldn't get the answers to the security trivia questions, even though I knew they would be needed to establish my identity. Total impasse.

After a couple more chicken-and-egg phone calls, I gathered up all the evidence I could muster to prove my identity and went to a branch to resolve the problem face-to-face. I was assured all was fine, and that they had put a note on my account to confirm that I had established my credentials. But I got home and the account was still blocked. So yet another chicken-and-egg phone call, another failed trivia test. Someone would call me back about it. Again, they called when I was out. Their refusal to adapt to the customer's context and constraints was costing them time and money, just as it was costing me time and stress.

I have learned a lot from the experience; for example, enter these conversations with every possible factoid of information at your fingertips; expect to be treated like a fraudster rather than a customer... The telephone interaction with a human being is not necessarily any more flexible than the interaction with an online system; the customer still has to conform to an interaction style determined by the organisation.

Of course, the nature of evidence is different in the digital world from the physical one, where (in this particular instance) credible photo ID is still regarded as the Gold Standard; by comparison, being able to answer account trivia seems like a pretty poor way of establishing identity. As discussed last week, evidence has to answer the question (in this case: is the caller the legitimate customer?). A trivia quiz is not usable by the average customer until they have learned to think like security people. This difference in thinking styles has been recognised for many years now (see for example "Users are not the enemy"); we talk about interactive system design being "user centred", but it would help if organisations could be user centred too, and this doesn't have to compromise security, if done well. I wonder how long it will take large companies to learn?

Tuesday, 1 May 2012

Seeing is believing?

In a recent interview, Mary Beard recounted a Roman joke: "A guy meets another in the street and says: 'I thought you were dead.' The bloke says: 'Can't you see I'm alive?' The first replies: 'But the person who told me you were dead is more reliable than you.'" She used the joke (apparently considered hilarious all those centuries ago) to illustrate a point about changing cultures and the nature of evidence. But the question of evidence is just as important in our work today. When are verbal reports a reliable form of evidence, and when do you need more direct forms of evidence? What can you learn from web analytics or the device log of an infusion pump? What does observing people tell you, as against interviewing them? Etc.

In general, device logs of any kind should tell you what happened, over a large number of instances, but they can't tell you anything much about the circumstances or the causes (what people thought they were doing, or what context they were in). So they give you an idea of where problems might lie, but not really what those problems are; they give quantity, but not necessarily quality.
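To make the quantity-without-quality point concrete, here is a minimal sketch in Python of the kind of aggregation a device log supports. The log format, column names and event names are invented for illustration; they are not taken from any real infusion pump or analytics system.

from collections import Counter
import csv

# Hypothetical log format: one row per event, with columns
# "timestamp", "device_id" and "event" (e.g. "dose_edit_cancelled").
def event_frequencies(path):
    """Count how often each kind of event occurs across a large log file."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["event"]] += 1
    return counts

# The counts tell us *what* happened and how often -- for instance, that dose
# edits are frequently cancelled -- but nothing about the circumstances or the
# causes, which is exactly the limitation described above.
# for event, n in event_frequencies("pump_log.csv").most_common(10):
#     print(f"{n:6d}  {event}")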

Conversely, interviews and observations can potentially give quality, but not quantity. They have greater explanatory power; interviews are good for finding out people's perceptions (e.g. of why they behave in certain ways), and observations will give insights into the contexts within which people do things and the circumstances surrounding actions. Interviews may overlook details that people consider unremarkable, while observations may catch those details but not explain them. And of course the questions that are asked or the way an observational study is conducted will determine what data is gathered.

As I type this, most of it seems very self-evident, and yet people often seem to choose inappropriate data gathering methods that don't reliably answer the questions posed. I'll use an example from a researcher I have great respect for, and who is undeniably a leader in the field: ever since I first read it, I have been perplexed by Jim Reason's analysis of photocopier errors – not because it is inconsistent with other studies, but because it is based entirely on retrospective self-reports. But our memories of past events are highly selective. I make errors every day, as we all do (see errordiary for both mundane and bizarre examples), but the ones I can recall later are the ones that were most embarrassing, most costly, most amusing or otherwise memorable. So what confidence can we have in retrospective reports as a way of measuring error? I don't know. And I don't think that's an admission of failure on my part; it's a recognition that retrospective self-report is an unreliable way of gathering data about human error. And that remains a challenge: to match research questions and data gathering and analysis methods appropriately.

Sunday, 22 April 2012

Making sense of health information

A couple of people have asked me why I'm interested in patients' sensemaking, and what the problem is with all the health information that's available on the web. Surely there's something for everyone there? Well maybe there is (though it doesn't seem that way), but both our studies of patients' information seeking and personal experience suggest that it's far from straightforward.

Part of the challenge is in getting the language right: finding the right words to describe a set of symptoms can be difficult, and if you get the wrong words then you'll get inappropriate information. And as others have noted, the information available on the internet tends to be biased towards more serious conditions, leading to a rash of cyberchondria. But actually, diagnosis is only a tiny part of the engagement with and use of health information. People have all sorts of questions, such as "should I be worried?" or "how can I change my lifestyle?", and much more individual and personal issues, often not focusing on a single question but on trying to understand an experience, or a situation, or how to manage a condition. For example, there may be general information on migraines available, but any individual needs to relate that generic information to their own experiences, and probably experiment with trigger factors and ways of managing their own migraine attacks, gradually building up a personal understanding over time, using both external resources and individual experiences.

The literature describes sensemaking in different ways that share many common features. Key elements are that people:
  • look for information to address recognised gaps in understanding (and there can be challenges in looking for information and in recognising relevant information when it is found).
  • store information (whether in their heads or externally) for both immediate and future reference.
  • integrate new information with their pre-existing understanding (so sensemaking never starts from a blank slate, and if pre-existing understanding is flawed then it may require a radical shift to correct that flawed understanding).
One important element that is often missing from the literature is the interpretation of information: people need to explicitly interpret information to relate it to their own concerns. This is particularly true for subjects where there are professional and lay perspectives, languages and concerns for the same basic topic. Not only do professionals and lay people (clinicians and patients in this case) have different terminology; they also have different concerns, different engagement, and different ways of thinking about the topic.

Sensemaking is about changing understanding, so it is highly individual. One challenge in designing any kind of resource that helps people make sense of health information is recognising the variety of audiences for information (prior knowledge, kinds of concerns, etc.) and making it easy for people to find information that is relevant to them as individuals, right here and now. People will always need to invest effort in learning: I don't think there's any way around that (indeed, I hope there isn't!)... but patients' sensemaking seems particularly interesting because we're all patients sometimes, and because making sense of our health is important but could surely be easier than it seems to be right now.