Wednesday 26 December 2012

Second-hand serendipity?

Doing research on serendipity enables me to reflect more than I would otherwise have done on experiences that I'd class as serendipitous. Preparing for a recent workshop, I realised that it was a serendipitous encounter that led to all our work on serendipity, and transformed the careers of at least two members of my research team...

DSVIS 2004 was held at Tremsbuttel Castle in Germany. People from Lexis Nexis UK participated (i.e. the company paid for them to get out of the office and attend an academic conference that was frankly quite tangential to their core business). Over a beer, I mentioned that one of my post-docs had done his PhD on journalists' information seeking, and that Nexis had been an important product for them. The findings about how journalists used information (and particularly Nexis) were interesting to them, so they commissioned us to run a workshop for their staff on journalists' information seeking. This was followed by further consultancy projects on lawyers' information seeking, and collaboration on a research project on "making sense of information" (MaSI). These projects led to new Lexis Nexis products that are still going from strength to strength. All because Lexis Nexis supported their staff to go to a workshop in Germany in 2004 and we met there.

That same meeting enabled me to develop information interaction and sensemaking work that was foundational to the SerenA project studying serendipity. It also provided lots of opportunities for at least two members of my research team to study legal information seeking. So that one meeting, all starting with a beer (!), has been of immense value, to both us and Lexis Nexis. I suspect that my team have never realised quite how much all of our careers owe to that one serendipitous connection that they weren't even a direct part of!

Wednesday 19 December 2012

When is "Okay" not Okay?

Twenty (or more) years ago, I worked with a software development kit that demanded that I click "OK" every time it crashed (which was at least once a day). I wanted a "Not OK" button – not because it would have a different outcome, but because it better expressed what I was feeling at the time.

Now I find that Facebook puts the same socially inappropriate demand on the user:

This dialogue box uses socially appropriate terms such as "Sorry" and "Please", but I want to say "I've noted the problem and what to do about it", not "Okay". The software developer presumably regards the requirement to click "Okay" as a simple acknowledgement that "something went wrong". And at one level that is all that can be said: no amount of ranting will change the system. But "Okay" usually means something stronger: that I accept the behaviour, and don't mind if it happens again. It presupposes that the individual has choice: to accept or reject the behaviour. And implicitly that the other agent will take note of the response and act accordingly in future. In this case, of course, there is no such learning, no such evolving relationship between user and system. It's a pseudo-dialogue, and actually it is not "okay" at all.

Friday 7 December 2012

Hidden in plain sight

Last weekend, I was showing a visiting colleague around the Wellcome Collection. As he stopped to take a photograph with his iPhone, I noticed that he unlocked his phone first, then flicked through several screens to locate the camera app, selected it, and took the snap. I quickly took out my own iPhone and showed him how to access the camera function immediately by sliding up the camera icon on the "lock" screen. He was amazed: a mix of delighted and appalled. He considers himself to be a "power user" but had never noticed the icon nor discovered its purpose.

I had noticed the camera icon a few months ago, following an operating system upgrade, but I also had not discovered its purpose unaided, having assumed that it was some kind of informational icon rather than a functional slider that provided a useful short-cut. I had to be shown its use by someone else who had already discovered it. Doh!

Once discovered, the feature is quite obvious. But it is not as easily discoverable as it might be: there is no immediately presented information about key operating system changes, and few people search for features they have no reason to expect to find. Children may explore objects just to see what happens; many adults lose this. Just putting something on the screen does not guarantee that it will be noticed or appropriately interpreted.

Social interactions are so often a powerful means for learning about the world and the less obvious affordances of systems.

Tuesday 13 November 2012

It was a dark and stormy night... accounting for the physical when designing the digital

Yesterday, I used the London cycle hire scheme for the first time. I had checked all the instructions on how to hire a bike online before heading off to the nearest cycle station, all prepared with my cycle helmet and my payment card. For various reasons, it was dark and drizzling by the time I got there. The cycle station display was well illuminated, so I could go through the early stages of the transaction without difficulty, but then it came to inserting the payment card. Ah. No illumination. No nearby streetlight to improve visibility. I found myself feeling all over the front of the machine to locate the slot… which turned out to be angled upwards rather than being horizontal like most payment card slots. I eventually managed to orient the card correctly in the dark and get it into the reader.

Several steps of interaction later, the display informed me that the transaction had been successful, and that my cycle release code was being printed. Nothing happened. Apparently, the machine had run out of paper. Without paper, there is no release code, and so no way of actually getting a cycle from the racks.

To cut a long story short, it took over 30 minutes, and inserting my payment card into four different cycle station machines distributed around Bloomsbury, before I finally got a printed release code and could take a bicycle for a spin. By then it was too late to embark on the planned errand, but at least I got a cycle ride in the rain...

The developers have clearly thought through the design well in many ways. But subtleties of the ways the physical and the digital work together have been overlooked. Why is there no illumination (whether from street lighting or built into the cycle station) for the payment card slot or the printout slot? Why is there apparently no mechanism for the machine to detect that it is out of paper before the aspiring cyclist starts the interaction? Or to display the release code on-screen to make the system more resilient to paper failure? Such nuanced aspects of the situated use of the technology in practice (in the dark and the rain) have clearly not been considered. It should be a universal design heuristic: if you have a technology that may be used outdoors, check that it all works when it's cold, dark and damp. Especially in cold, dark, damp cities.
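The resilience argument is easy to sketch in code. Below is a minimal illustration (in Python, with entirely invented class and method names, not anything from the real cycle-hire system) of the two ideas above: check consumables before starting a transaction, and fall back to displaying the release code on-screen if printing fails.

```python
class OutOfPaperError(Exception):
    pass


class Terminal:
    """Stand-in for a docking-station terminal; any real hardware API will differ."""

    def __init__(self, paper_left=0):
        self.paper_left = paper_left

    def printer_has_paper(self):
        return self.paper_left > 0

    def charge_and_generate_code(self, card):
        return "12345"  # pretend the payment succeeded and a code was allocated

    def print_slip(self, code):
        if not self.printer_has_paper():
            raise OutOfPaperError()
        self.paper_left -= 1

    def display(self, message):
        print(message)


def issue_release_code(terminal, card):
    # Pre-flight check: warn before taking a payment the machine cannot follow through on.
    if not terminal.printer_has_paper():
        terminal.display("Printer out of paper: your code will be shown on screen.")
    code = terminal.charge_and_generate_code(card)
    try:
        terminal.print_slip(code)
    except OutOfPaperError:
        # Graceful degradation: a successful payment should never leave the
        # customer without a release code.
        terminal.display("Your release code is " + code)
    return code


issue_release_code(Terminal(paper_left=0), card="1234-5678-9012-3456")
```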

Thursday 1 November 2012

If we can't even design taps...

Today, I got a wet arm: the tap control was immediately behind the faucet, so I reached through the line of fire to turn it on, and the inevitable happened. But it looks Well Designed:

I thought I had already encountered every possible type of poor design: the tap that is unpredictable because there is only one control to govern both temperature and flow rate:
The tap that needs the explicit notice to tell the user how to make it work:
The taps where it's almost impossible to tell whether the water will flow from the shower head or the main tap:
The tap that looks as if you should turn it, when actually that controls the temperature, not the flow; for that, you have to pull the control towards you:

Yvonne Rogers told me of a tap that would only work if you were not wearing black....

The user of a tap wants to control two parameters: the temperature and the flow rate. There are plenty of designs around that enable people to do this without any faff at all. But these are apparently not interesting or exciting or aesthetically pleasing enough. So innocent users get frozen, scalded, bemused or unexpectedly wet as tap designers devise ever more innovative taps. If we can't even get tap design right, what hope for more complex interactive technologies, I ask myself...
 

Tuesday 23 October 2012

Information detours

Recently, I did an online transaction. It started out superficially simple: to buy rail tickets from London to Salford. But then I had to check on a map of Salford to find out which station was appropriate. And the train operator wanted to know my loyalty card number, so I had to go and get that from my purse. Then my credit card supplier wanted me to add in additional security information, which of course I don't remember, so I had to work to reconstruct what it might be. A superficially simple task had turned into a complicated one with lots of subtasks that comprise "invisible work".

It's a repeating pattern: information tasks that are, at first sight, simple turn out to involve lots of detours like this, and sometimes the detours are longer than the original task.

Occasionally the detours are predictable; for example, I know that to complete my tax return I'm going to have to dig out a year's worth of records of income and expenditure that are filed in different places (some physical, some digital). There aren't actually a large number of relevant records, but I still dread this data collation task, which is why the relatively simple task of completing the form always gets put off until the last minute.

It's both hard to keep track of where one is amongst all these information detours and hard to keep focused on the main task through all the detours and distractions of our rich information environments. I'd like a supply of digital place-keeping widgets to help with progress-tracking amongst the clutter. If they could also link seamlessly to physical information resources, that would be even better...
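To make the wish concrete: something as simple as a task stack would capture the structure of these detours. The sketch below (Python, all names invented) pushes each detour on top of the task it interrupted, so finishing a detour automatically surfaces what you were doing before.

```python
class PlaceKeeper:
    """A toy place-keeping widget: a stack of tasks and the detours that interrupt them."""

    def __init__(self):
        self._stack = []

    def start(self, task):
        # Begin a task or a detour; it becomes the current focus.
        self._stack.append(task)

    def current(self):
        return self._stack[-1] if self._stack else None

    def finish(self):
        # Finish the current task/detour and reveal whatever it interrupted.
        done = self._stack.pop()
        return done, self.current()


keeper = PlaceKeeper()
keeper.start("buy rail tickets to Salford")
keeper.start("check a map of Salford for the right station")   # detour
done, resumed = keeper.finish()
print("finished:", done, "| back to:", resumed)
# finished: check a map of Salford for the right station | back to: buy rail tickets to Salford
```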

Saturday 6 October 2012

Hammers and LaTeX: some challenges of interdisciplinary working

I am editing a document in LaTeX. I am just about capable of doing this, but I'm finding it a real strain. LaTeX is very familiar to many computer scientists (particularly those who need to include formal notations in their writing), but is not my document production tool of choice. For me, it is an unwieldy tool, and I am distracted from what I want to say by what I perceive as a clunky interface.

Martin Heidegger used the analogy of a hammer: when a hammer is well designed and being used correctly then it becomes an extension of the arm: it is effectively invisible. When it is too heavy for the user, or the centre of gravity is in the wrong place, or when the user hits their thumb with it, then the hammer becomes the focus of attention rather than the task at hand. The hammer is no longer invisible, but disruptive. For me, LaTeX is a disruptive tool: I'll get there in the end, but the tool is distracting me from the task.
So why am I using it? Because the people I'm working with are more comfortable with LaTeX than with WYSIWYG (What You See Is What You Get) word processors. For them, LaTeX is the invisible tool and they find it much more powerful than (for example) MSWord. So we have a very low-level, apparently trivial, barrier to interdisciplinary working: each of us finds it challenging to use the tools that the other finds most usable and useful.

The tools are just the start of the challenge: interdisciplinary working involves learning each other's language, respecting each other's culture and value system, learning how to communicate effectively and write in ways that "make sense" to the other. In CHI+MED, technologists and social scientists are working with clinicians, and we often find mismatches in understanding (e.g. some of us find error interesting, and a problem to be exposed and addressed, while others find even the suggestion that clinicians might ever make mistakes deeply threatening).  In SerenA, scientists are working with artists, and again there are differences in values, e.g. between productivity and creativity.

There is a big push towards interdisciplinary working in research, and this is really important. For example, "problem solvers" (computer scientists and technologists) who deliver innovative systems need to be able to communicate effectively with "problem owners" (medics, lawyers, journalists, and other knowledge workers) so that next-generation systems achieve their potential. It's also an exciting journey: we have so much to learn from each other! But we shouldn't underestimate the variety and magnitude of the challenges faced in interdisciplinary work. Now, back to that LaTeX...

Wednesday 3 October 2012

The ethics of ethics

Is ethics about doing good or about avoiding doing harm?

Last week, I was chatting with a Danish colleague, who is running user studies in several European countries as part of the MONARCA project. They are developing and testing novel technologies for the detection and management of bipolar disorder. Apparently, it took 15 months to obtain ethical clearance to conduct studies in one country. 15 months!

Processes were faster in other countries, but still measured in months rather than weeks. In the UK, our experience is that the time taken to get approval to conduct user studies, even of established technologies, is highly variable, and unpredictable. But always measured in months rather than days or weeks. So it is impossible to plan a research project in detail before ethical clearance and R&D approval have been obtained. But this is a high-effort process, so you don't want to embark on it until you're sure the study will be going ahead.

The challenge of getting ethical clearance can be a real disincentive to proposing research projects on healthcare technologies. Why embark on projects in healthcare when you can do equally interesting projects in other areas that don't put such barriers in the way?

There have been some very welcome improvements over the past couple of years, with more streamlined processes for audit studies and proportionate review. But the focus is still on avoiding harm regardless of potential benefits. “VIP” is a useful mnemonic for the main concerns:

  • Vulnerability: particular care needs to be taken when recruiting participants from groups that might be regarded as vulnerable, such as children, the elderly, or people with a particular condition (illness, addiction, etc.).
  • Informed consent: participants should be informed of the purpose of the study, and of their right to withdraw at any time.
  • Privacy and confidentiality should be respected.
However, our work with clinicians and patients has really brought home to me that ethics goes beyond VIP. It should be about doing good, not just avoiding doing harm. But "doing good" might be in the long term: understanding current design and user experiences to guide the design of future technologies. And that doesn't directly address the need to engage positively with research participants. What motivates an individual clinician, patient or carer to engage with research on the design and use of medical technologies? There should be some positive benefit to participants.

For some, it will be about the "common good": about being prepared to invest time and expertise for long-term benefits. For others, there's an indirect pay-back in terms of having expertise and experience recognised and valued, or of being listened to, or having a chance to reflect on their condition, or their use of technology. There are probably many other complex motivations for participating in research. As researchers, we need to better understand those motivations, and respect them and work with them.

Why do clinicians and patients engage in research on healthcare technologies? Because the perceived value – whether personal or societal – outweighs the perceived costs. Why do researchers engage in research on healthcare technologies? Ditto. The costs to all parties need to be proportionate to the benefits. So the ethics processes need to be proportionate, to encourage essential research. And as researchers we need to be mindful of the benefits, as well as the costs, to participants in research.

Wednesday 19 September 2012

Encountering information: serendipity or overload?

After my keynote at ISIC, one of the participants challenged me on my claim that information overload is a "bad thing" (not that I put it quite like that, but I certainly suggested it was something to be avoided). I framed it as a challenge when trying to design to support serendipity. We had an extended discussion about this later that day.

What Eva made me realise (thanks, Eva!) is that encountering exactly the same information can be regarded positively or negatively depending on the circumstances and the attitude of mind. If the attitude is one of exploring and of opportunity then the experience is typically positive. Eva consumes information enthusiastically on a wide variety of topics, and rarely if ever feels overloaded by the sheer volume of information available.

Whether or not information encountering is regarded as serendipitous is another question. A while ago, I gave a PechaKucha talk on the SerenA project; in the talk, I gave an example that I argued was serendipity: I encountered information that was unexpected, where I made a connection between my ambitions and an opportunity that was presenting itself, and from which the outcome was valuable. I also described the "sandpit" process that initiated SerenA – i.e., putting a bunch of academics together in a space that was conducive to ideas generation. Arguably, this experience was positive and creative, but not serendipitous, because it was designed to lead to positive outcomes: although we could not have predicted the form of the outcome, we expected there to be some unanticipated outcome. Based on our empirical studies of serendipitous experiences, we have developed a process model of serendipity, namely that "a new connection is made that involves a mix of unexpectedness and insight and has the potential to lead to a valuable outcome. Projections are made on the potential value of the outcome and actions are taken to exploit the connection, leading to an (unanticipated) valuable outcome." From this, we also developed a classification framework based on different mixes of unexpectedness, insight and value that define a "serendipity space" encompassing different "strengths" of serendipity.
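For illustration only, the three dimensions of that framework can be written down as a simple structure; the scales and the "strength" rule below are my own invention for this sketch, not the project's actual classification.

```python
from dataclasses import dataclass


@dataclass
class SerendipityEpisode:
    unexpectedness: float  # 0 = fully anticipated, 1 = completely unexpected
    insight: float         # 0 = no new connection made, 1 = a strong new connection
    value: float           # 0 = no valuable outcome, 1 = a highly valuable outcome

    def strength(self) -> float:
        """Crude 'strength of serendipity': the weakest of the three ingredients."""
        return min(self.unexpectedness, self.insight, self.value)


# The sandpit example from the post: insightful and valuable, but engineered to
# produce unanticipated outcomes, so low on unexpectedness and weak as serendipity.
sandpit = SerendipityEpisode(unexpectedness=0.2, insight=0.8, value=0.9)
print(sandpit.strength())  # 0.2
```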

So where does information overload fit? Well, for a busy academic (and I'm typical of many busy people), new information (however valuable) often represents new obligations:
  •  to assimilate the information,
  •  to assess its value, and
  •  to act on it. 
I recognise the potential value of opportunities, and feel frustrated by my lack of capacity to exploit them all. And because of limited capacity, every opportunity taken means other opportunities that have to be passed over. In addition, limited memory means that even assimilating all the information I "should" know represents a substantial obligation that I can't hope to fulfil. So I feel under constant threat of information overload. And that seriously inhibits my openness to serendipitous encounters.

As recounted in the PechaKucha talk: twenty-something years ago, when my children were 2 years and 3 months old respectively, I came across an advert for a PhD studentship. It was my "dream" studentship, on an exciting topic and in the perfect location for me. Doing a PhD was not in my plans at the time, but it was too good an opportunity to miss. And the outcome has been fantastic. It was unquestionably a serendipitous encounter. Apart from the unintended consequence that I now feel constantly under threat of total information overload!

Wednesday 12 September 2012

The Hillsborough report 23 years on

I'm listening right now to the news report on the review of the Hillsborough disaster from 23 years ago. I have heard terms including "betrayed", "dreadful mistakes were made", "lies" and "shift blame" (all BBC News at Ten). There is talk of "cover up", and people not admitting to mistakes made.

Families of the victims seem to be saying that they were never looking for compensation but that they wanted to be heard, and they want to know the truth. Being heard seems to be so important; if we do not hear then we do not learn; if we do not learn then we cannot change practices for the better. Maybe for some compensation is important, but for many others all that matters is that the tragedy should not have been in vain.

Earlier today, in a different context, a colleague was arguing that we need people to be "accountable" for their actions and decisions, that people need to be punished for mistakes. But we all make mistakes, repeatedly and often amusingly; for example, this evening, I phoned one daughter thinking I was phoning the other one, and because I was so sure I knew who I was talking to, and because we have a lot of "common ground", it took us both a while to realise my error. We could both laugh about it. Errordiary documents lots of equally amusing mistakes. But occasionally, mistakes have unfortunate consequences. Hillsborough is a stark reminder of this. Do unfortunate consequences automatically mean that the people who made mistakes should be punished for them? Surely covering up mistakes is even more serious than making errors in the first place. How much could we have learned (and how much easier would it have been for families to recover) if those responsible had not covered up and avoided being accountable? Here, I want to use the term "accountable" in a much more positive sense, meaning that they were able to account for the decisions that they made, based on the information and goals that they had at the time.

Being accountable currently seems to be about assigning blame; maybe this is sometimes appropriate – particularly if the individual or organisation in question has not learned from previous analogous incidents. But maybe sometimes learning from mistakes is of more long term value than punishing people for them. That implies a different understanding of "accountable". We need to find a better balance between blame and learning. Unless I am much mistaken.

Friday 7 September 2012

Patients' perceptions of infusion devices

Having recently had two friends-and-relations in hospitals on infusion pumps (and one on a syringe driver too), I have become even more aware of the need to take patients' experiences into account when thinking about the design of devices. To the best of my knowledge, there have been no situated studies of patients' perceptions of infusion devices. I should emphasise that this is not a formal study: just an account of two articulate people's experiences of having glucose, saline and insulin administered via infusion devices.

Alf (not his real name) felt imprisoned in his bed by the fact that the devices were plugged in to the wall. He hated being confined to bed, and would have been perfectly capable of making it to the bathroom if he hadn't felt attached to the wall. He didn't like to ask the staff whether the devices could run on battery for a while so that he could move around.

This contrasts with stories that others have told us: of patients being seen out with their infusion devices having a smoke outside the hospital, chatting up a fellow patient in the sunshine, and even going to Tesco's to do some shopping with drip stand in tow. I suspect this reflects how much experience people have of receiving medication via infusion devices.

It also contrasts with some of our observations in situated studies, where we have found that devices are run on battery for extended periods of time because there are too few sockets available, or simply to allow the patient to move around more freely.

Manufacturers generally take the view that devices should remain on mains power except for very short periods, which is a position somewhere between Alf's sense of imprisonment and some other observations. As pumps get smaller and more portable, it should be possible for patients to feel less imprisoned by their devices, but this creates new challenges of improving battery life, adapting the physical form of stands to make them easier to move around with, and making sure that batteries get re-charged reliably (which depends on there being sufficient power sockets as well as good notifications of when charge is getting low).

Bert has a cannula in the crook of his elbow, and almost every time he moves his arm it sets off the occlusion alarm. He has learned to silence it, but it only stays silent for a short period and then alarms again (he hasn't worked out how to restart the pump). We noted the same problem in a previous informal observation: because patients are not meant to touch their own device controls, nurses are understandably reluctant to tell patients how to restart them, and knowledge of how to restart the pump was passed around the ward, from patients who had been in there for a while to the more recent arrivals. Some pumps will automatically detect that the occlusion has been cleared and restart themselves.
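The behaviour described above can be pictured as a small state machine. The sketch below is not any real pump's firmware, just an illustration of the silence-then-re-alarm cycle and of the auto-restart that some pumps offer once the occlusion has cleared; the timeout value is invented.

```python
from enum import Enum, auto


class PumpState(Enum):
    INFUSING = auto()
    OCCLUSION_ALARM = auto()
    ALARM_SILENCED = auto()


SILENCE_PERIOD_S = 120  # invented value; real pumps vary


def next_state(state, occluded, silence_pressed, seconds_since_silence, auto_restart):
    if state is PumpState.INFUSING:
        return PumpState.OCCLUSION_ALARM if occluded else PumpState.INFUSING
    if state is PumpState.OCCLUSION_ALARM:
        if not occluded and auto_restart:
            return PumpState.INFUSING        # occlusion cleared: restart automatically
        if silence_pressed:
            return PumpState.ALARM_SILENCED  # the patient can silence, but not restart
        return PumpState.OCCLUSION_ALARM
    # ALARM_SILENCED
    if not occluded and auto_restart:
        return PumpState.INFUSING
    if seconds_since_silence > SILENCE_PERIOD_S:
        return PumpState.OCCLUSION_ALARM     # the alarm comes back after the timeout
    return PumpState.ALARM_SILENCED
```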


However, Bert hates the alarm going off while he's eating, and would really like to be able to suspend the infusion while he eats, then restart it again afterwards. In our observations, we have noted pump operation being suspended while the patient has a shower: nursing staff are able to achieve this effect, but the patient himself is not. Bert feels capable of taking responsibility for more of his own care than he is being permitted to, and finds that frustrating.

The one-size-fits-all approach to infusion device design, which removes both power and responsibility from the patient (who often has the time and the intelligence to take a more active role in their own care), may improve safety by reducing variability. However, it may also reduce resilience, and it definitely degrades the quality of the patient experience by concentrating all the responsibility on busy, multi-tasking clinical staff.

Sunday 2 September 2012

Situated interaction from the system perspective: oops!

I am in Tokyo, to give a talk at Information Seeking In Context. Blogger infers that because a post is being composed in Tokyo, the author must understand Kanji. Result:

I have just experimented by pressing random buttons to enlarge the screen shot above from its default illegible size. It is quite gratifying to discover that it is still possible to compose a post, add a link, add a graphic, and maybe even publish it as intended. But believe me: it's taking a lot of effort. I am interacting with what appear to me to be squiggles (though of course those squiggles have meaning for readers of Kanji), and I can only guess the meaning from the graphical layout and positioning of the squiggles.

This is an amusing illustration of the dangers of computing technology being inappropriately "situated". The system has responded to the "place" aspect of the context while not adequately accounting for the "user" aspect. I fully accept that the physical environment presents information to me in Kanji, and that I sometimes fail to interpret it correctly. I don't expect the digital environment to put the same hurdles in my way!

Friday 31 August 2012

Inarticulate? The challenges of health information seeking

Showing impeccable timing, three people I care about have fallen ill at the same time. To make sense of what is happening to each of them, I have been doing a lot of internet searching. And it has become really clear that – as a lay person – some health information needs are much easier to satisfy than others. Paradoxically, it's the more technical ones that are easier to work with. Or more precisely: the ones for which a key technical term is provided (e.g. by a clinician).

In one case, we were told that Bert (not his real name) needed an angioplasty. I had no idea what one of those was, but a quick search on the query term "angioplasty" gave several search results that were consistent with each other, comprehensible and credible. Following up on that and related terms has meant that I now (rightly or wrongly) feel that I understand fairly well what Bert has gone through and what implications it has for the future.

In a second case, Alf (also not his real name) told me that the excruciating pain he had been experiencing had been diagnosed as gallstones, and in particular a stone that had lodged in the bile duct. The treatment was a procedure (not an operation) that involved putting a tube through his nose and down into his gall bladder and removing the stone. Any search that I tried with terms such as "gallstones", "removal", "nose" led to sites about "cholecystectomy" (i.e. either laparoscopic [keyhole] or open surgery). We both knew that Alf had not had an operation. It took hours of searching with different terms to find any information that even approximately matched what Alf and I knew. Eventually, I tried terms involving "camera" and "gallstones", which led to "endoscopy". As I type, I believe that Alf had an "endoscopic retrograde cholangiopancreatography". I can't even pronounce those terms, never mind spell them. But if you know the terms then there are pretty good descriptions of what they involve that really help the lay person to make sense of the treatment.

In the third case, Clarissa (not her real name) was incredibly tired. Her doctor had dismissed it as "a virus". I've seen a virus defined as "a condition that the doctor can't diagnose in detail but isn't worried about". But this "virus" had been around for weeks. What is happening? Well, most internet searches that involve the word "fatigue" and any other symptom seem to lead to results about "cancer". That's not what you want to find. And it's not what I believe. I'm still trying to make sense of what might be affecting Clarissa. I don't have a good search term, and I can't find one.

Health is an area that affects us all. We all want to make sense of conditions that affect us and our loved ones. But there is a huge terminological gulf between lay language for describing health experiences and the technical language of professionals. If you know the technical "keys" then it's easy to find lay explanations, but the opposite is not yet true: if you only have a lay way of talking about health experiences then there's no easy way to tap into that more sophisticated understanding of health information. This isn't an easy challenge; I wonder whether anyone can rise to it.

Thursday 16 August 2012

"He's got dimples!": making sense of visualisations

Laura's baby is due in 2 months, so time to get a 3D scan... and the first thing that Laura told me after the scan was that "he's got dimples!" I'm sure that if there had been any problem detected, that would have been mentioned first, but no: the most important information is that he has dimples, just like her. But for the radiographer doing the scan, it's likely that dimples came way down the list of features to look out for (after formation of the spine, whether the cord is around his neck, how large his head is...). Conversely, when her uncle looked at pictures from the scan, his main comment was about the way it looked as if there was a light shining on the baby. And I wanted to know what the strange shape between chin and elbow was (I still don't know...).

3D image of baby in womb


People look at scenes and scans in different ways, and notice different features of them. They "make sense" of the visual information in different ways. Some are concerned with syntactic features such as aspects of the image quality. Some are more concerned with the semantics: what it means (in this case, for the health of the child, or what he will look like). Yet others may be more concerned with the pragmatics: how information from the scene can inform action – this might have been the case if the scan were being used by a surgeon to guide them during a live operation.

Scanning technology has come on in leaps and bounds over recent decades: the ultrasound scan I had before Laura was born was difficult even to recognise as a baby from a still image: a naive viewer could only make sense of the whole by seeing how the parts moved together. Advances in technology have meant that what used to be difficult interpretation tasks for the human have been made much easier. And they have made more information potentially available (I didn't even know whether Laura was a boy or a girl until she was born, never mind whether or not she had dimples).


New technologies create many new possibilities – for monitoring, diagnosis, treatment, and even for joy. In this case, they've made the user's interpretation task much easier and made more information available. The scan is for well defined purposes, and the value of the visualisation is that it takes a large volume of data and presents it in a form that really makes sense. There is lots of information about the baby that the 3D scan does not provide, but for its intended purpose it is delightful.

Sunday 12 August 2012

The right tool for the job? Qualitative methods in HCI

It's sad to admit it, but my holiday reading has included Carla Willig's (2008) text on qualitative research methods in psychology and Jonathan Smith's (2007) edited collection on the same topic. I particularly enjoyed the chapters by Smith on Interpretive Phenomenological Analysis and by Kathy Charmaz on Grounded Theory in the edited collection. One striking feature of both books is that they have a narrative structure of "here's a method; here are its foundations; this is what it's good for; this is how to apply it". In other words, both seem to take the view that one becomes an expert in using a particular method, then builds a career by defining problems that are amenable to that method.

One of the features of Human–Computer Interaction (HCI) as a discipline is that it is not (with a few notable exceptions) fixated on what methods to apply. It is much more concerned with choosing the right tools for the job at hand, namely some aspect of the design or evaluation of interactive systems that enhance the user experience, productivity, safety or similar. So does it matter whether the method applied is "clean" Grounded Theory (in any of its variants) or "clean" IPA? I would argue not. The problem, though, is that we need better ways of planning qualitative studies in HCI, and then of describing how data was really gathered and what analysis was performed, so that we can better assess the quality, validity and scope of the reported findings.

There's a trade-off to be made between doing studies that can be done well because the method is clear and well-understood and doing studies that are important (e.g. making systems safer) but for which the method is unavoidably messy and improvisational. An important challenge for HCI (which has always adopted and adapted methods from other disciplines that have stronger methodological foundations) is to develop a better set of methods that address the important research challenges of interaction design. These aren't limited to qualitative research methods, but that is certainly one area where it's important to have a better repertoire of techniques that can be applied intelligently and accountably to address exciting problems.

Saturday 28 July 2012

Making time for serendipity

Serendipity is about time and an attitude of mind. But it's not just about the individual: it also depends on the social context. Laura Dantonio proposed a Masters project that looked at the role of social media in facilitating serendipity. Her initial focus was on how people came across unexpected, but valuable, social media content, but it quickly embraced the idea that other people are intentionally creating this content and links to it. People are investing time in making opportunities for serendipity. This comes from both sides: both creating the opportunities and exploiting them. This is a gamble: there may be little pay-off for the time invested, because there's such a chance element in serendipity. The more we look at serendipity, the clearer it becomes that design is an important contributor to this experience, but that it is more about attitude: about openness to the opportunities that life presents (and recognizing unexpected connections between ideas), and the imagination to create opportunities for others. Laura's is the first work that we're aware of that really emphasises the social angle to serendipity: that people make opportunities for others to encounter interesting information.

Wednesday 4 July 2012

An accident: lots of factors, no blame

At one level, this is a story that has been told many times already, and yet this particular rendering of it is haunting me. I don't know all the details (and never will), so parts of the following are speculation, but the story is my best understanding of what happened, and it highlights some of the challenges in trying to make sense of human error and system design.

The air ambulance made a tricky descent. Although the incident took place near a local hospital, the casualty was badly injured and needed specialist treatment, so was flown to a major trauma centre. Hopefully, he will live.

What happened? The man fell, probably about 10 metres, as he was being lowered from the top of a climbing wall. It seems that he had put his climbing harness on backwards and tied the rope on to a gear loop (which is not designed to hold much weight) rather than tying it in correctly (through the waist loop and leg loop, which were behind him). Apparently, as he let the rope take his weight to be lowered off from the climb, the gear loop gave way.

I can only guess that both the climber and his partner were new to climbing, since apparently neither of them knew how to put the harness on correctly, and also that there was no-one else on the wall at the time (since climbers generally look out for each other and point out unsafe practices). But so many things must have aligned for the accident to happen: both climbers must have signed a declaration that they were experienced and recognised the risks; the harness in question had a gear loop at the centre of the back that they could mistake for a rope attachment point... but that loop wasn't strong enough to take the climber's weight; someone had supplied that harness to the climber without either providing clear instructions on how to put it on or checking that he knew...

So many factors: the climber and his partner apparently believed they were more expert than they actually were; the harness supplier (whether that was a vendor or a friend) didn't check that the climber knew how to use the equipment; there weren't other more expert climbers around to notice the error; the design of the harness had a usability vulnerability (a central loop that actually wasn't rated for a high load and could be mistaken for a rope attachment point); the wall's policy allowed people to self-certify as experienced without checking. Was anyone to blame? Surely not: this wasn't "an accident waiting to happen". But the system clearly wasn't as resilient as it might have been because when all these factors lined up, a young man had to be airlifted to hospital. I wish him well, and hope he makes a full recovery.

The wall has learnt from the incident and changed its admissions policy; hopefully, there will be other learning from it too to further reduce the likelihood of any similar incident occurring in the future. Safety is improved through learning, not through blaming.

Saturday 9 June 2012

Give me a little more time...

A few weeks ago, one of our PhD students, Amir Kamsin, was awarded 3rd prize in the student research competition at CHI for his research on how we manage our time, and tools to support time management. Congratulations to Amir! The fact that it has taken until now to comment shows how difficult I am finding it to do things in a timely way. Many books and blogs (e.g. ProfSerious') have been written on how we should manage our time; it's difficult to even find the time to read them!

Some years ago, Thomas Green and I did a study of time management, and concluded that "what you get is not what you need". In that paper, we were focusing mainly on diary / calendar management and highlighted important limitations of online diaries, most of which still hold today (e.g. no good way of marking meetings as provisional, of including travelling time as well as meeting time, or of making entries appropriately interpretable by others). In contrast, Amir is focusing on "to do" management. There are many aspects to his findings, of course. Two of them particularly resonate for me...

The first is how much of our time management is governed by emotional factors. It has long been a standing joke in my research group that you can tell when someone is avoiding doing a particular (usually big) job because they suddenly get ultra-productive on other tasks. The guilt about the big job is a great motivator! But I've become increasingly aware that there are even very small tasks that I avoid, either because I don't know where to start or because the first step is daunting. I've started to mentally label these as "little black clouds", and I'm gradually learning to prioritise them before they turn into big black clouds – not necessarily by doing them immediately, but by committing to a time to do them. No "to-do" management systems that I'm aware of make emotional factors explicit. Even their implementations of "importance" and "urgency" don't capture the fluidity of these ideas in practice. There's much more to managing tasks and projects than importance and urgency.

The second is how much "to do" information is tied up in email. Not just simple "hit reply" to-dos, but also complex discussions and decisions about projects. There are tools that integrate email, calendars and address books, and there are to-do management systems with or without calendars. But I really want a project management tool that integrates completely seamlessly with both my email and my calendar. And is quick and easy to learn. And requires minimal extra effort to manage. Anyone know of one?
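As a thought experiment, here is roughly what a task record would need to carry to support all of that: an emotional label alongside importance and urgency, a committed time, and links back to the email thread and calendar entry it belongs to. The field names are mine, not any real tool's schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Task:
    title: str
    importance: int                          # 1 (low) .. 5 (high)
    urgency: int                             # 1 (low) .. 5 (high)
    emotional_label: Optional[str] = None    # e.g. "little black cloud"
    committed_time: Optional[datetime] = None
    source_email_id: Optional[str] = None    # the thread the to-do came from
    calendar_event_id: Optional[str] = None  # the slot committed to doing it


tax_records = Task(
    title="Collate a year's income and expenditure records",
    importance=4,
    urgency=2,
    emotional_label="little black cloud",
    committed_time=datetime(2013, 1, 25, 9, 0),
    source_email_id="msg-1234",
)
```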

Friday 1 June 2012

When is a qualitative study a Grounded Theory study?

I recently came across Beki Grinter's blog posts on Grounded Theory. These make great reading.

The term has been used a lot in HCI as a "bumper sticker" for any and every qualitative analysis regardless of whether or not it follows any of the GT recipes closely, and whether or not it results in theory-building. I exaggerate slightly, but not much. As Beki says, GT is about developing theory, not just about doing a "bottom up" qualitative analysis, possibly without even having any particular questions or aims in mind.

Sometimes, the questions do change, as you discover that your initial questions or assumptions about what you might find are wrong. This has happened to us more than once. For example, we conducted a study of London Underground control rooms where the initial aim was to understand the commonalities and contrasts across different control rooms, and what effects these differences had on the work of controllers, and the ways they used the various artefacts in the environment. In practice, we found that the commonalities were much more interesting than the contrasts, and that there were several themes that emerged across all the contexts we studied. The most intriguing was discovering how much the controllers seemed to be playing with a big train set! This links in to the literature on "serious games", a literature that we hadn't even considered when we started the study (so we had to learn about it fast!).

In our experience, there's an interdependent cycle between qualitative data gathering and analysis and pre-existing theory. You start with questions, gather and analyse some data, realise your questions weren't quite right, so modify them (usually to be more interesting!), gather more data, analyse it much more deeply, realise that Theory X almost accounts for your data, see what insights relating your data to Theory X provides, gather yet more data, analyse it further... end up with either some radically new theory or a minor adaptation of Theory X. Or (as in our study of digital libraries deployment) end up using Theory X (in this case, Communities of Practice) to make sense of the situations you've studied.

Many would say that a "clean" GT doesn't draw explicitly on any existing theories, but builds theory from data. In practice, in my experience, you get a richer analysis if you do draw on other theory, but that's not an essential part of GT. The important thing is to be reflective and critical: to use theory to test and shine light on your data, but not to succumb to confirmation bias, where you only notice the data that fits the theory and ignore the rest. Theory is always there to be overturned!

Friday 25 May 2012

Designing for "me"

The best designers seem to design for themselves. I just love my latest Rab jacket. I know Rab's not a woman, but he's a climber and he understands what climbers need. Most climbing equipment has been designed by climbers; in fact, I can't imagine how you would design good climbing gear without really understanding what climbers do and what they need. Designers need a dual skill set: to be great designers, and to really understand the context for which they are designing.

Shift your attention to interaction design. Bill Moggridge is recognised as a great designer, and he argues powerfully for the importance of intuition and design skill in designing good products. BUT he draws on examples where people could be designing for themselves. Designers who are also game-players can invoke intuition to design good games, for example. But judging by the design of most washing machine controls, few designers of these systems actually do the laundry! There seems to be a huge gulf between contexts where the designer is also a user, or has an intimate knowledge of the context of use, and contexts where the designer is an outsider.

It's often easy to make assumptions about other people's work, and about the nuances of their activities. You get over-simplifications that result in inappropriate design decisions. Techniques such as Contextual Inquiry are intended to help the design team understand the context of use in depth. But it's not always possible for the entire design team to immerse themselves in the context of use. Then you need some surrogates, such as rich descriptions that help the design team to imagine being there. Dourish presents a compelling argument against ethnographers having to present implications for design: he argues that it should be enough to provide a rich description of the context of use. His argument is much more sophisticated than the one I'm presenting here. Which is simply that it's impossible to reliably design for a situation you don't understand deeply. And for that, you need ways for people to become "dual experts" – in design, and in the situations for which they are designing.

Saturday 19 May 2012

When is a user like a lemon?

Discussing the design lifecycle with one of my PhD students, I found myself referring back to Don Norman's book on emotional design – in particular, to the cover picture of a Philippe Starck lemon squeezer. The evaluation criteria for a lemon squeezer are, I would guess, that it can be used to squeeze lemons (for which it probably needs to be tested with some lemons), that it can be washed, that it will not corrode or break quickly, and that (in this case, at least) it looks beautiful.

These evaluation criteria can be addressed relatively rapidly during the design lifecycle. You don't need to suspend the design process for a significant length of time to go and find a representative sample of lemons on which to test a prototype squeezer. You don't need to plan a complex lemon-squeezing study with a carefully devised set of lemon-squeezing tasks. There's just one main task for the squeezer to perform, and the variability in lemons is mercifully low.

In contrast, most interactive computer systems support a plethora of tasks, and are intended for use by a wide variety of people, so requirements gathering and user testing have to be planned as separate activities in the design of interactive systems. Yet even in the 21st century, this doesn't seem to be fully recognised. As we found in a study a few years ago, agile software development processes don't typically build in time for substantive user engagement (other than by involving a few user representatives in the development team). And when you come to the standards and regulations for medical devices, they barely differentiate between latex gloves and glucometers or interactive devices in intensive care. Users of interactive systems are apparently regarded as being as uniform and controllable as lemons: define what they should do, and they will do it. In our dreams! (Or maybe our nightmares...)

Monday 7 May 2012

Usable security and the total customer experience

Last week, I had a problem with my online Santander account. This isn't particularly about that company, but a reflection on a multi-channel interactive experience and the nature of evidence. When I phoned to sort out the problem, I was asked a series of security questions that were essentially "trivia" questions about the account that could only be answered accurately by being logged in at the time. I'd been expecting a different kind of security question (mother's maiden name and the like), so didn't have the required details to hand. Every question I couldn't answer made my security rating worse, and quite quickly I was being referred to the fraud department. Except that they would only ring me back within 6 hours, at their convenience, not mine. I never did receive that call because I couldn't stay in for that long. The account got blocked, so now I couldn't get the answers to the security trivia questions even though I knew that would be needed to establish my identity. Total impasse.

After a couple more chicken-and-egg phone calls, I gathered up all the evidence I could muster to prove my identity and went to a branch to resolve the problem face-to-face. I was assured all was fine, and that they had put a note on my account to confirm that I had established my credentials. But I got home and the account was still blocked. So yet another chicken-and-egg phone call, another failed trivia test. Someone would call me back about it. Again, they called when I was out. Their refusal to adapt to the customer's context and constraints was costing them time and money, just as it was costing me time and stress.

I have learned a lot from the experience; for example, enter these conversations with every possible factoid of information at your fingertips; expect to be treated like a fraudster rather than a customer... The telephone interaction with a human being is not necessarily any more flexible than the interaction with an online system; the customer still has to conform to an interaction style determined by the organisation.

Of course, the nature of evidence is different in the digital world from the physical one, where (in this particular instance) credible photo ID is still regarded as the Gold Standard, but being able to answer account trivia seems like a pretty poor way of establishing identity. As discussed last week, evidence has to answer the question (in this case: is the caller the legitimate customer?). A trivia quiz is not usable by the average customer until they have learned to think like security people. This difference in thinking styles has been recognised for many years now (see for example "Users are not the enemy"); we talk about interactive system design being "user centred", but it is helpful if organisations can be user centred too, and this doesn't have to compromise security, if done well. I wonder how long it will take large companies to learn?

Tuesday 1 May 2012

Seeing is believing?

In a recent interview, Mary Beard recounted a Roman joke: "A guy meets another in the street and says: 'I thought you were dead.' The bloke says: 'Can't you see I'm alive?' The first replies: 'But the person who told me you were dead is more reliable than you.'" She used the joke (apparently considered hilarious all those centuries ago) to illustrate a point about changing cultures and the nature of evidence. But the question of evidence is just as important in our work today. When are verbal reports a reliable form of evidence, and when do you need more direct forms of evidence? What can you learn from web analytics or the device log of an infusion pump? What does observing people tell you, as against interviewing them? Etc.

In general, device logs of any kind should tell you what happened, over a large number of instances, but they can't tell you anything much about the circumstances or the causes (what people thought they were doing, or what context they were in). So they give you an idea of where problems might lie, but not really what those problems are; they give quantity, but not necessarily quality.

Conversely, interviews and observations can potentially give quality, but not quantity. They have greater explanatory power; interviews are good for finding out people's perceptions (e.g. of why they behave in certain ways), and observations will give insights into the contexts within which people do things and the circumstances surrounding actions. Interviews may overlook details that people consider unremarkable, while observations may catch those details but not explain them. And of course the questions that are asked or the way an observational study is conducted will determine what data is gathered.

As I type this, most of it seems very self-evident, and yet people often seem to choose inappropriate data gathering methods that don't reliably answer the questions posed. I'll use an example from a researcher I have great respect for, and who is undeniably a leader in the field: ever since I first read it, I have been perplexed by Jim Reason's analysis of photocopier errors – not because it is inconsistent with other studies, but because it is based entirely on retrospective self-reports. But our memories of past events are highly selective. I make errors every day, as we all do (see errordiary for both mundane and bizarre examples), but the ones I can recall later are the ones that were most embarrassing, most costly, most amusing or otherwise memorable. So what confidence can we have in retrospective reports as a way of measuring error? I don't know. And I don't think that's an admission of failure on my part; it's a recognition that retrospective self-report is an unreliable way of gathering data about human error. And that remains a challenge: to match research questions and data gathering and analysis methods appropriately.

Sunday 22 April 2012

Making sense of health information

A couple of people have asked me why I'm interested in patients' sensemaking, and what the problem is with all the health information that's available on the web. Surely there's something for everyone there? Well maybe there is (though it doesn't seem that way), but both our studies of patients' information seeking and personal experience suggest that it's far from straightforward.

Part of the challenge is in getting the language right: finding the right words to describe a set of symptoms can be difficult, and if you get the wrong words then you'll get inappropriate information. And as others have noted, the information available on the internet tends to be biased towards more serious conditions, leading to a rash of cyberchondria. But actually, diagnosis is only a tiny part of the engagement with and use of health information. People have all sorts of questions, such as "should I be worried?" or "how can I change my lifestyle?", and much more individual and personal issues, often not focusing on a single question but on trying to understand an experience, or a situation, or how to manage a condition. For example, there may be general information on migraines available, but any individual needs to relate that generic information to their own experiences, and probably experiment with trigger factors and ways of managing their own migraine attacks, gradually building up a personal understanding over time, using both external resources and individual experiences.

The literature describes sensemaking in different ways that share many common features. Key elements are that people:
  • look for information to address recognised gaps in understanding (and there can be challenges in looking for information and in recognising relevant information when it is found).
  • store information (whether in their heads or externally) for both immediate and future reference.
  • integrate new information with their pre-existing understanding (so sensemaking never starts from a blank slate, and if pre-existing understanding is flawed then it may require a radical shift to correct that flawed understanding).
One important element that is often missing from the literature is interpretation: people need to explicitly interpret information to relate it to their own concerns. This is particularly true for subjects where professional and lay perspectives, languages and concerns coexist around the same basic topic. Not only do professionals and lay people (clinicians and patients in this case) have different terminology; they also have different concerns, different levels of engagement, and different ways of thinking about the topic.

Sensemaking is about changing understanding, so it is highly individual. One challenge in designing any kind of resource that helps people make sense of health information is recognising the variety of audiences for information (prior knowledge, kinds of concerns, etc.) and making it easy for people to find information that is relevant to them, as an individual, right here and now. People will always need to invest effort in learning: I don't think there's any way around that (indeed, I hope there isn't!)... but patients' sensemaking seems particularly interesting because we're all patients sometimes, and because making sense of our health is important, but could surely be easier than it seems to be right now.

Sunday 15 April 2012

The pushmepullyou of conceptual design

I've just been reading Jeff Johnson's and Austin Henderson's new book on 'conceptual models'. They say (p.18) that "A conceptual model describes how designers want users to think about the application." At first this worried me: surely the designers should be starting by understanding how users think about their activity and how the application can best support users?

Reading on, it becomes clear that they do regard putting the user at the centre as important, and they include some compelling examples of this. But how to develop a good conceptual model that is grounded in users' expectations and experiences is not the focus of the text: the focus is on how to get from that model to an implementation. This approach is complementary to ours on CASSM, where we've been concerned with how to elicit and describe users' conceptual models, and then how to support them through design.

It seems to be impossible to simultaneously put both the user(s) and the technology at the centre of the discourse. In focusing on the users, CASSM is guilty of downplaying the challenges of implementation. Conversely, in focusing on implementation, Johnson and Henderson de-emphasise the challenges of eliciting users' conceptual models. These can seem, like the pushmepullyou from Dr Doolittle, to be pulling in opposite directions. But this text is a welcome reminder that conceptual models still matter in design.

Thursday 5 April 2012

KISS: Keep It Simple, Sam!

Tony Hoare is credited with claiming that... "There are two ways of constructing a software design; one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." Of course, he is focusing on software: on whether it is easy to read or test, or whether it is impossible to read (what used to be called "spaghetti code" but probably has some other name now), and impossible to devise a comprehensive set of tests for.

When systems suffer "feature creep", acquiring more and more features to address real or imagined user needs, it's nigh on impossible to keep the code simple, so inevitably it becomes harder to test, and harder to be confident that the testing has been comprehensive. This is a universal truth, and it's certainly the case in the design of software for infusion devices. The addition of drug libraries and dose error reduction software, and the implementation of multi-function systems to be used across a range of settings for a variety of purposes, make it increasingly difficult to be sure that the software will perform as intended under all circumstances. There is then a trade-off between delivering a timely system and delivering a well designed and well tested system... or delivering a system that needs repeated software upgrades as problems are unearthed. And you can never be sure you've really found all the possible problems.
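As a back-of-the-envelope illustration (the categories and numbers below are invented for the sake of the arithmetic, not taken from any real device), consider how quickly the space of configurations to test grows once software versions, drug libraries, care settings and delivery modes all multiply together:

# Illustrative arithmetic only: how added options multiply the testing burden.
# The categories and counts are invented for this example, not drawn from
# any real infusion device.
from itertools import product

software_versions = ["3.7", "3.8"]
drug_libraries = ["adult ICU", "paediatric", "oncology"]
care_settings = ["ICU", "general ward", "ambulance"]
delivery_modes = ["continuous", "bolus", "PCA"]

configurations = list(product(software_versions, drug_libraries,
                              care_settings, delivery_modes))
# 2 * 3 * 3 * 3 = 54 distinct combinations, before varying doses, rates or
# error conditions. Each new option multiplies, rather than adds to, the
# number of circumstances under which the software must perform as intended.
print(len(configurations))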

These aren't just problems for the software: they're also problems for the users. When software upgrades change the way the system performs, it's difficult for the users to predict how it will behave. Nurses don't have the mental resources to be constantly thinking about whether they're working with the infusion device that's running version 3.7 of the software or the one that's been upgraded to version 3.8, or to anticipate the effects of the different software versions, or different drug libraries, on system performance. Systems that are already complicated enough are made even more so by such variability.

Having fought with several complicated technologies recently, my experience is not that they have no obvious deficiencies, but that those deficiencies are really, really hard to articulate clearly. And if you can't even describe a problem, it's going to be very hard to fix it. Better to avoid problems in the first place: KISS!

Saturday 24 March 2012

"Be prepared"

We're thinking a lot about resilience at the moment (what it is, what it is not, how it is useful for thinking about design and training). A couple of years ago, I went climbing on Lundy. Beautiful place, highly recommended, though prone to being wet. Lundy doesn't have a climbing equipment shop, so it's important that you have everything with you. And because most of the climbing is on sea cliffs, if you drop anything you're unlikely to be able to retrieve it. So take spares: that's recognising a generic vulnerability, and planning a generic solution. In particular, I had the foresight to take a spare belay plate (essential for keeping your partner safe while climbing). This is an anticipatory approach to resilience for the "known unknowns": first recognise a vulnerability, and then act to reduce it.

It happened: when I was half way up the Devil's Slide, my partner pulled the rope hard just as I was removing it from the belay plate, and I lost my grip... and watched the belay plate bounce down the rock to a watery grave in the sea 30m below. That's OK: I had a spare. Except that I didn't: the spare was in my rucksack at the top of the cliff. Fortunately, though, I had knowledge: I knew how to belay using an Italian Hitch knot, so I could improvise with other equipment I was carrying and keep us safe for the rest of the climb. This is a different kind of resilience: having a repertoire of skills that can be brought to bear in unforeseen circumstances, and having generic tools (like bits of string, penknives, and the like) that can be appropriated to fit unexpected needs.

This is a "boy scout" approach to resilience: for the "unknown unknowns" that cannot be anticipated, it's a case of having skills that can be brought to bear to deal with the unforeseen situation, and tools that can be used in ways that they might not have been designed for.

Thursday 15 March 2012

Undies in the safe

Some time ago, I went to a conference in Konstanz. I put a few items in the room safe (££, passport, etc.)... and forgot to remove them when I checked out. Oops! Rather inconvenient!

This week, I've been in Stuttgart. How to make use of the room safe while also being sure to remember those important items when I check out? Solution: put my clean underwear for the last day in the safe with the higher-value items. No room thief would be interested in the undies, but I'm not going to leave without them, am I? That worked! It's an example of what we're currently calling a "resilient strategy": we're not sure that that's the right term, so if you (the reader) have better ideas, do let me know. Whatever the word, the important idea is that I anticipated a vulnerability to forgetting (drawing on the analogy of a similar incident) and formulated a way of reducing the likelihood of forgetting, by co-locating the forgettable items with some unforgettable ones.

The strategy worked even better than expected, though, because I told some people about what I'd done (to illustrate a point about resilience) while at the conference. And on my last evening, I was in the lift with another attendee. His parting words were: "don't forget your knickers!" In other situations, that could have been embarrassing; in the context, it raised some smiles... and acted as a further external memory aid to ensure that I remembered not just my clothing, but also the passport and sterling cash that I'd been storing in the safe. Other people engaging with a problem can make the system so much more resilient too!

A black & white regulatory world

I've just come home from MedTec Europe. The Human Factors stream was very interesting, with some great talks. However, the discussion focused largely on safety, error and legislation. This focus is important, but if it becomes the sole focus then all innovation is stifled. All everyone will do is to satisfy the requirements and avoid taking risks.

While it is a widely agreed aim of all medical interventions to “do no harm”, any intervention carries some small risk of harm, and medical progress requires that we accept those risks. So “no harm” has to be balanced by “where possible, do good” (where “good” is difficult to measure, though concepts such as quality-adjusted life years, or QALYs, try to capture this idea). Without risk, we would have no interventions – no medical profession, no drugs, no treatments. That is unimaginable. So we need to have a mature debate about acceptable risk. The world is not black-and-white… but every new piece of regulation reduces the shades of grey that are acceptable.

Imagine that universities changed their assessment to a simple pass or fail. What information would this give the future employers of our students about which of them are likely to perform well? Of course, academic excellence isn't the only assessment criterion, but if it's not a criterion at all then why do we assess it? More to the point, if it became a simple pass-fail, what motivation would there be for students to excel? The canny student would do the minimum to pass, and enjoy themselves (even) more. A pass-fail shows whether work basically conforms to requirements or not. More detailed grading gives an indication of how well the work performs: work that was awarded a mark of 91% has been assessed as being of substantially higher quality than work that was awarded 51%, even though both have passed. Even this is a fairly blunt instrument, and I am certainly not suggesting that medical devices be graded on a scale of 1 to 100. Quite apart from anything else, the best inhaler on the market for a young boy suffering from asthma is unlikely to also be the most appropriate for an elderly lady suffering from COPD.

Regulation is a very blunt instrument, and needs to be used with care. We also need to find ways to talk about the more complex (positive) qualities of next-generation products: risks are important, but so are benefits.

Saturday 10 March 2012

Attitudes to error in healthcare: when will we learn?

In a recent pair of radio programmes, James Reason discusses the possibility of a change in attitude in the UK National Health Service regarding human error and patient safety. The first programme focuses on experiences in the US, where some hospitals have shifted their approach towards open disclosure, being very open about incidents with the affected patients and their families. It shouldn't really be a surprise that this has reduced litigation and the size of payouts, as families feel more listened to and recognise that their bad experience has at least had some good outcome in terms of learning, to reduce the likelihood of such an error happening again.

The second programme focuses more on the UK National Health Service, on the "duty of candour" and "mandatory disclosure", and the idea of an open relationship between healthcare professional and patients. It discusses the fact that the traditional secrecy and cover-ups lead to "secondary trauma", in which patients' families suffer from the silence and the frustration of not being able to get to the truth. There is of course also a negative effect on doctors and nurses who suffer the guilt of harming someone who had put their trust in them. It wasn't mentioned in the programme, but the suicide of Kim Hiatt is a case in point.

A shift in attitude requires a huge cultural shift. There is local learning (e.g. by an individual clinician or a clinical team) that probably takes effect even without disclosure, provided that there is a chance to reflect on the incident. But to have a broader impact, the learning needs to be disseminated more widely. This should lead to changes in practice, and also to changes in the design of technology and protocols for delivering clinical care. This requires incident reporting mechanisms that are open, thorough and clear. Rather than focusing on who is "responsible" (with a subtext that that individual is to blame), or on how to "manage" an incident (e.g. in terms of how it gets reported by the media), we will only make real progress on patient safety by emphasising learning. Reports of incidents that lay blame (e.g. the report on an unfortunate incident in which a baby received an overdose) will hardly encourage greater disclosure: if you fear blame then the natural reaction is to clam up. Conversely, though, if you clam up then that tends to encourage others to blame: it becomes a vicious cycle.

As I've argued in a recent CS4FN article, we need a changed attitude to reporting incidents that recognises the value of reporting for learning. We also need incident reporting mechanisms that are open and effective: that contain enough detail to facilitate learning (without compromising patient or clinician confidentiality), and that are available to view and to search, so that others can learn from every unfortunate error. It's not true that every cloud has a silver lining, but if learning is effective then it can be the silver lining in the cloud of each unfortunate incident.

Sunday 26 February 2012

Ordering wine: the physical, the digital and the social

For a family birthday recently, we went to Inamo. This is not a restaurant review, but reflections on an interactive experience.

Instead of physical menus and a physical waiter, each of us had a personal interactive area on the tabletop that we used to send our individual order to the kitchen and do various other things. In some ways this was great fun (we could have "tablecloth wars" in which we kept changing the decor on the table, or play games such as Battleships across the table).

In other ways it was quite dysfunctional. For example, we had to explicitly negotiate about who was going to order bottles of water and wine because otherwise we'd have ended up with either none or 5 bottles. In most restaurants, you'd hear whether it's been ordered yet or not, so you know how to behave when it's your turn to order. But it's more subtle than that: whereas with physical menus people tend to hold them up so that they are still "in the space" with their party, with the tabletop menus people were heads-down and more engrossed in ordering from the menu than the company, and there was no external cue (the arrival of the waiter) to synchronise ordering. So the shift from the physical to the digital meant that some activities that used to be seamless have now become seamful and error-prone. The human-human coordination that is invisible (or seamless) in the physical world has to be made explicit and coordinated in the digital. Conversely, the digital design creates new possibilities that it would be difficult to replicate in the physical implementation.

There is a widespread belief that you can take a physical activity and implement a digital solution that is, in all respects, the same or better. Not so: there are almost always trade-offs.

Saturday 18 February 2012

Device use in intensive care

Atish Rajkomar's study of how infusion devices are used in intensive care has just been accepted for publication in the Journal of Biomedical Informatics: a great outcome from an MSc project!

It's a great achievement for someone without a clinical background to go into such a complex clinical environment and make sense of anything that's going on there. The Distributed Cognition approach that Atish took seems to have been a help, providing a way of looking at the environment that focuses attention on some of the things that matter (though maybe overlooking other things in the process). But this is a difficult thing to prove!

This points to one of the real challenges for the design of future healthcare technologies: to design effectively, the design team really does need dual expertise in technology design and in clinical work. There are few courses available that provide such dual expertise, and surprisingly few people seem interested in acquiring it. Therein lies another challenge: how to make healthcare technologies interesting and engaging?

Sunday 29 January 2012

Serendipity: time, space and connections

Someone recently brought http://memex.naughtons.org/archives/2012/01/26/15216 to my attention. Quite apart from it being entertaining, it resonates well with our work on understanding serendipity (www.serena.ac.uk). Historically, work on serendipity has emphasised the encountering of information (often while looking for other information) and the importance of the "prepared mind" in recognising the value of the encountered information. This video (by Steven Johnson) emphasises the importance of "slow hunches" and connections. It highlights the dilemma that, with so many sources of information available to people, it can be difficult not to feel overwhelmed by information, and by demands to deliver results quickly -- and yet there are many more opportunities for identifying and exploiting new connections in our highly connected world. Our work on serendipity has highlighted the need to go beyond recognising the value of the connection to having the time and opportunity to exploit it. This requires "mental space" for reflecting on the nature and value of the connection, as well as the sense of freedom to follow up on it. Johnson claims that "chance favours the connected mind", but it also favours the mind that experiences the freedom to build on opportunities, that is not overwhelmed by demands.