Sunday, 12 May 2013

Engineering for HCI: Upfront effort, downstream pay-back

The end of Engineering Practice 1 (c.1980).
Once upon a time, I was a graduate trainee at an engineering company. The training was organised as three-month blocks in different areas of the company. My first three months were on the (work)shop floor. Spending hours working milling machines and lathes was a bit of a shock after studying mathematics at Cambridge. You mean it is possible to use your body as well as your mind to solve problems?!?
I learned that engineering was about the art of the possible (at that time, for example, you couldn't drill holes that went around corners, though 3D printing has since changed our view of what is possible). It was also about managing precision: manufacturing parts that were precise enough for their purpose. Engineering was inherently physical: about solving problems by designing and delivering physical artefacts that were robust, reliable and fit for purpose. The antithesis of the "trust me, I'm an engineer" view (however much that makes me smile).

Enter "software engineering": arguably, this term was coined to give legitimacy to a certain kind of computer programming. Programming was (and often still is) something of a cottage industry: people building one-off systems that seem to work, but no-one is quite sure of how, or when they might break down. Engineering is intended to reduce the variability and improve the reliability of software systems. And deliver systems that are fit for purpose.

So what does it mean to "engineer" an interactive computer system? At the most recent IFIP Working Group 2.7/13.4 meeting, we developed a video: 'Engineering for HCI: Upfront effort, downstream pay-back'. And it was accepted for inclusion in the CHI2013 Video Showcase. Success! Preparing this short video turned out to be even more difficult than I had anticipated. There really didn't seem to be much consensus on what it means to "engineer" an interactive computer system. There is general agreement that it involves some rigour and systematicity, some use of theory and science to deliver reproducible results, but does the resulting system have to be usable, to be fit for purpose? And how would one measure that? Not really clear.

I started by saying that I once worked for an engineering company. That term is probably fairly unambiguous. But I've never heard of an "interactive systems engineering company" or an "HCI engineering company". I wonder what one of those would look like or deliver.

Saturday, 27 April 2013

When I get older: the uncountable positives


Last week, I was at a presentation by John Clarkson. It was a great talk: interesting, informative, thought provoking… Part-way through it, to make a point about the need for accessible technology, he presented a set of graphs showing how human capabilities decline with age. Basically, vision, hearing, strength, dexterity, etc. peak, on average, in the 20s, and it’s downhill all the way from there. It is possible that only two measurable values increase with age: age itself and grumpiness!

So this raises the obvious question: if we peak on every important variable when we’re in our 20s, why on earth aren’t most senior roles (Chief Executive, President, etc.) held by people in their 20s? Is this because grumpiness is in fact the most important quality, or is it because older people have other qualities that make them better suited to these roles? Most people would agree that it’s the latter.

The requisite qualities are often lumped under the term “wisdom”. I’m not an expert on wisdom, but I imagine there’s a literature defining and decomposing this concept to better understand it. One thing’s for sure though: it can’t be quantified in the way that visual or auditory acuity, strength, etc. can. The things that matter most for senior roles are not easily quantified.

We run a risk, in all walks of life, of thinking that if it can’t be measured then it has no value. In research we see it repeatedly in the view that the “gold standard” for research is controlled (quantifiable) experiments, and that qualitative research is “just stories”. In healthcare, this thinking manifests itself in many ways: in measures of clinical effectiveness and other outcome measures. In HCI, it manifests itself in the weight put on efficiency: of course, efficiency has its place (and we probably all have many examples of inefficient, frustrating interfaces), but there are many cases where the less easily measured outcomes (the quality of a search, the engagement of a game) are much more important.

As vision, hearing, memory, etc. decline, I'm celebrating wisdom and valuing the unmeasurable. Even if it can sound like "just stories".

Friday, 26 April 2013

Who's the boss? Time for a software update...

Last summer, I gave a couple of friends a lift to a place I was unfamiliar with, so I used a SatNav to help with the navigation. It was, of course, completely socially unaware. It interrupted our conversation repeatedly, without any consideration for when it is and is not appropriate to interrupt. No waiting for pauses in the conversation. No sensitivity to the importance of the message it was imparting. No apology. Standard SatNav behaviour. And indeed it's not obvious how one would design it any other way. After a while, we turned off the sound and relied solely on the visual guidance.
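
For what it's worth, here is a minimal sketch of what a more socially aware prompt policy might look like. Everything in it (the message names, the thresholds, the idea of an in-cabin microphone detecting pauses in conversation) is my own invention for illustration, not anything a real SatNav implements; it simply weighs how urgent a message is against whether the passengers are mid-conversation.

# A hypothetical sketch of a socially aware prompt scheduler. None of this
# reflects how any real SatNav works; it just illustrates the kind of context
# a more polite device might take into account.

URGENCY = {
    "turn_in_100m": 0.9,   # safety-critical: say it now, even mid-conversation
    "turn_in_2km": 0.4,    # can wait for a pause in the conversation
    "traffic_ahead": 0.2,  # low urgency: defer, or show visually only
}

def should_speak(message: str, seconds_since_passengers_spoke: float) -> bool:
    """Decide whether a spoken prompt is worth interrupting the conversation for.

    seconds_since_passengers_spoke would come from a (hypothetical) in-cabin
    microphone that detects whether the passengers are currently talking.
    """
    urgency = URGENCY.get(message, 0.5)
    in_conversation = seconds_since_passengers_spoke < 2.0  # no recent pause
    if urgency > 0.8:
        return True   # too important to hold back
    if not in_conversation:
        return True   # natural pause: safe to speak
    return False      # otherwise stay quiet (the visual display still updates)

# A low-urgency message is held back while people are talking; an imminent turn is not.
print(should_speak("traffic_ahead", seconds_since_passengers_spoke=0.5))  # False
print(should_speak("turn_in_100m", seconds_since_passengers_spoke=0.5))   # True

Even this toy version makes the design trade-off explicit: someone has to decide which messages justify an interruption, and the device needs at least a crude sense of what its passengers are doing.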

More recently, a colleague started up his computer near the end of a meeting, and it went into a cycle of displays: "don't turn me off"; "downloading one of thirty-three". I took a record of the beginning of this interaction, but gave up and left well before the downloading had finished.
It might have been fine to pull the plug on the downloading (who knows?) but it wasn’t going to be a graceful exit. The technology seemed to be saying: “You’ve got to wait for me. I am in control here.” Presumably, the design was acceptable for a desktop machine that could just be left to complete the task, but it wasn’t for a portable computer that had to be closed up to be taken from the meeting room.

I have many more examples, and I am sure that every reader does too, of situations where the design of technology is inappropriate because the technology is unaware of the social context in which it is placed, and the development team have been unwilling or unable to make the technology better fit that context.

Saturday, 23 March 2013

"How to avoid mistakes in surgery": a summary and commentary

I've just returned from the US, and my one "must see" catch-up TV programme was "How to avoid mistakes in surgery" (now available on YouTube). It's great to see human error in healthcare getting such prominent billing, and being dealt with in such an informative way. This is a very quick synopsis (of the parts I particularly noted).

The programme uses the case of Elaine Bromiley as the starting point and motivation for being concerned about human error in healthcare. The narrator, Kevin Fong, draws on experience from other domains, including aviation, firefighting and Formula 1 pit-stops, to propose ways to make surgery and anaesthesia safer. Themes that emerge include:
  • the importance of training, and the value of simulation suites (simlabs) for setting up challenging scenarios for practice. This is consistent with the literature on naturalistic decision making, though the programme focuses particularly on the importance of situational awareness (seeing the bigger picture).
  • the value of checklists for ensuring that basic safety checks have been completed. This is based on the work of Atul Gawande, and is gaining recognition in UK hospitals. It is claimed that checklists help to change power relationships, particularly in the operating theatre. I don't know whether there is evidence to support this claim, but it is intuitively appealing. Certainly, it is important in operating theatres, just as it has long been recognised as being important in aviation.
  • the criticality of handovers from the operating theatre to the intensive care unit. This is where the learning from F1 pit-stops comes in. It's about having a system, clear roles, and someone who's in charge. For me, the way that much of the essential technology gets piled on the bed around the patient raised a particular question: isn't there a better way to do this?
  • dealing with extreme situations that are outside anything that has been trained for or anticipated. The example used for this was the Hudson River plane incident; ironically, on Thursday afternoon, at about the time this programme was first broadcast, Pete Doyle and I were discussing this incident as an example that isn't really that extreme, because the pilot had been explicitly trained in all the elements of the situation, though not in the particular combination of them that occurred that day. There is a spectrum of resilient behaviour, and this is an example of well-executed behaviour, but it's not clear to me that it is really "extreme". The programme refers to the need to build a robust, resilient safety system. Who can disagree with that? It advocates an approach of "standardise until you have to improvise". True, but this could miss an important element: standardisation, done badly, reduces the professional expertise and skill of the individual, and it is essential to enhance that expertise if the individual is to be able to improvise effectively. I suspect that clinicians resist checklists precisely because they seem to reduce their professional expertise, when in fact they should be liberating them to develop their expertise at the "edges", to deal better with the extreme situations. But of course that demands that clinical professional development includes opportunities and challenges to develop that expertise. That is a challenge!
The programme finishes with a call to learn from mistakes, to have a positive attitude to errors. Captain Chesley 'Sully' Sullenberger talks about "lessons bought with blood", and about the "moral failure of forgetting these mistakes and having to re-learn them". On the basis of our research to date, and of discussions with others in the US and Canada studying incident reporting and learning from mistakes, this remains a challenge for healthcare.

Monday, 4 March 2013

Ethics and informed consent: is "informed" always best?

I am in the US, visiting some of the leading research groups studying human factors, patient safety and interactive technologies. This feels like "coming home": not in the sense that I feel more at home in the US than the UK (I don't), but in that these groups care about the same things that we do – namely, the design, deployment and use of interactive medical devices. Talking about this feels like a constant uphill struggle in the UK, where mundane devices such as infusion pumps are effectively "invisible".

One of the issues that has exercised me today is the question of whether it is always ethical to obtain informed consent from the patients who are receiving drugs via infusion devices. The group I'm working with here in Boston have IRB (Institutional Review Board, aka Ethics Board) clearance to obtain informed consent only from the lead nurse on the ward where they are studying the use of devices. Not even from all the nurses, never mind the patients. In one of our studies, we were only allowed to observe a nurse programming a device in the middle of the night if we had obtained permission to observe from the patient before they had fallen asleep (which could have been several hours earlier), even though we were not gathering any patient data or disturbing the patient in any way. In fact, we probably disturbed the patient more by obtaining informed consent from them than we would have by simply observing the programming of the pump without their explicit knowledge.

We recently discussed the design of a planned study of possible errors with infusion devices with patient representatives. Feedback we got from one of them was: "patients and relatives need to have complete confidence in the staff and equipment, almost blind faith in many instances." There are times when ensuring that patients are fully informed is less important than giving them reassurance. The same is true for all of us when we have no control over the situation.

On the flight on the way here, there was an area of turbulence during which we all had to fasten our seatbelts. That's fine. What was less fine was the announcement from the pilot that we shouldn't be unduly worried about this (the implication being that we should be a little bit worried): as a passenger in seat 27F, what use was it for me to worry? No idea! It made the flight less comfortable for me, to no obvious benefit (to me or anyone else).

Similarly with patients: if we accept that studying the use of medical devices has potential long-term benefits, we also need to review how we engage patients in such studies. Does obtaining informed consent benefit them, or do the opposite? Maybe there are times when the principle of "blind faith" should dominate.

Friday, 15 February 2013

The information journey and information ecosystems

Last year, I wrote a short piece for "Designing the search experience". But I didn't write it short enough (!) so it got edited down to a much more focused piece on serendipity. Which I won't reproduce here for copyright reasons (no, I don't get any royalties!). The theme that got cut was on information ecosystems: the recognition that people are encountering and working with information resources across multiple modalities the whole time. And that well designed information resources exploit that, rather than being stand-alone material. OK, so this blog is just digital, but it draws on and refers out to other information resources when relevant!

Here is the text from the cutting room floor...

The information journey presents an abstract view of information interaction from an individual’s perspective. We first developed this framework during work studying patients’ information seeking; the most important point that emerged from that study was the need for validation and interpretation. Finding information is not enough: people also need to be able to assess the reliability of the information (validation) and relate it to their personal situation and needs (interpretation).

This need for validation and interpretation had not been central to earlier information seeking models—possibly because earlier studies had not worked with user groups (such as patients) with limited domain knowledge, nor focused on the context surrounding information seeking. But we discerned these validation and interpretation steps in all of our studies: patients, journalists, lawyers and researchers alike.

The information journey starts when an individual either identifies a need (a gap in knowledge) or encounters information that addresses a latent need or interest. Once a need has been identified, a way to address that need must be determined and acted upon, such as asking the person at the next desk, going to a library, looking "in the world," or accessing internet resources. On the web, that typically means searching, browsing, and following trails of "information scent". Often, finding information involves several different resources and activities. These varied sources create an information ecosystem of digital, physical and social resources.

Information encountered during this journey needs to be validated and interpreted. Validation is often a loose assessment of the credibility of the information. Sillence and colleagues highlight important stages in the process: an early and rapid assessment—based on criteria such as the website’s design and whether it appears to be an advertising site—is typically followed by a more deliberate analysis of the information content, such as assessing whether it is consistent with other sources of information.
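
To make that two-stage pattern concrete, here is a minimal sketch in Python. The cues and the threshold are invented for illustration (they are not taken from Sillence and colleagues' studies); the point is simply that a cheap surface-level screen precedes a slower check of the content itself.

from dataclasses import dataclass

# Illustrative only: a toy two-stage credibility check. The cues and the
# 0.7 threshold are invented, not drawn from the published studies.

@dataclass
class Page:
    looks_professional: bool             # rapid surface cue: site design
    is_advertising: bool                 # rapid surface cue: does it look like an ad site?
    agreement_with_other_sources: float  # 0..1, the result of slower content analysis

def rapid_assessment(page: Page) -> bool:
    """Stage 1: a quick screen on surface cues; obvious non-starters are rejected."""
    return page.looks_professional and not page.is_advertising

def deliberate_assessment(page: Page) -> bool:
    """Stage 2: slower analysis of the content itself, e.g. cross-checking it
    against other sources the reader has already seen."""
    return page.agreement_with_other_sources >= 0.7

def validate(page: Page) -> bool:
    return rapid_assessment(page) and deliberate_assessment(page)

# A slick-looking advertising site fails at stage 1, however plausible its content.
print(validate(Page(looks_professional=True, is_advertising=True,
                    agreement_with_other_sources=0.9)))  # False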
 
Interpretation is not usually straightforward. It often involves support from information intermediaries (an important part of the information ecosystem). This is one of the important roles of domain specialists (e.g. doctors and lawyers): working with lay people to interpret the “facts” in the context of the actual, situated needs. Even without help from intermediaries, Sillence & co. describe the lay users of health information in their study as acting like scientists, generating and testing hypotheses as they encountered new information resources, both online and offline. No one information resource is sufficient: online information fits in a broader ecology of information sources which are used together, albeit informally, to establish confidence and build understanding.
 
The interpretation of information can often highlight further gaps in understanding. So one information need often leads to others. For example, a colleague of mine was recently planning to buy a Bluetooth headset. His initial assumption was that there were only a few suitable headsets on the market, and his aim was simply to identify the cheapest; but it quickly became apparent that there were hundreds of possible headsets, and that he first needed to understand more about their technical specifications and performance characteristics to choose one that suited his needs. A simple information problem had turned into a complex, multi-faceted one. A known item search had turned into an exploratory search, and the activity had turned from fact-finding to sensemaking.

Information resources surround us. We are informavores, consuming and interpreting information across a range of channels. We are participants in huge information ecosystems, and new information interaction technologies need to be designed not just to work well on their own, but to be valuable components of those ecosystems.

Thursday, 14 February 2013

The importance of context (even for recognising family!)

I've been using the face recognition feature in my photograph management software. It was coming up with some suggestions that were pretty impressive (e.g. finding several additional photos that featured my mother, when primed with a few) and some that felt a little spooky (e.g. suggesting that a photo of me was actually of my mother – something that probably none of us wants to admit to, however attractive the parent). But it was also making some inexplicably bizarre suggestions – e.g. that a male colleague might be one of my daughters, or that a wine glass was a face at all. This recognition technology is getting very sophisticated, but it clearly does not recognise faces in a human-like way!


From a computational perspective, it does not account for context: it identifies and matches features that, in some low-level way, correspond to "face", and it gets that right a lot of the time, identifying real human faces and artificial ones (such as a doll's). However, it does not have the background knowledge to do the gender- and age-based reasoning that people naturally do, which makes some of its suggestions seem bizarre. And the fact that it works with low-level features of an image is really exposed when it suggests that a wine glass should be given a name.
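
To illustrate what such a contextual check might look like, here is a purely hypothetical sketch (the names, fields and tolerances are all my own invention; no real photo manager exposes anything like this). It takes the candidates produced by a low-level feature matcher and filters out those that contradict coarse background knowledge such as age and gender.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    name: str
    feature_similarity: float        # what the low-level feature matcher produces
    gender: Optional[str] = None     # background knowledge about already-labelled people
    birth_year: Optional[int] = None

def plausible(c: Candidate, photo_year: int,
              apparent_gender: Optional[str], apparent_age: Optional[int]) -> bool:
    """Reject matches that contradict coarse contextual cues."""
    if apparent_gender and c.gender and apparent_gender != c.gender:
        return False
    if apparent_age is not None and c.birth_year is not None:
        if abs((photo_year - c.birth_year) - apparent_age) > 15:  # crude tolerance
            return False
    return True

def suggest(candidates: List[Candidate], photo_year: int,
            apparent_gender: Optional[str] = None,
            apparent_age: Optional[int] = None) -> Optional[Candidate]:
    ok = [c for c in candidates if plausible(c, photo_year, apparent_gender, apparent_age)]
    return max(ok, key=lambda c: c.feature_similarity, default=None)

# A male colleague may score well on low-level features, but the contextual
# filter stops him being suggested as one of my daughters.
colleague = Candidate("colleague", 0.82, gender="male", birth_year=1965)
daughter = Candidate("daughter", 0.78, gender="female", birth_year=1995)
best = suggest([colleague, daughter], photo_year=2012,
               apparent_gender="female", apparent_age=17)
print(best.name if best else None)  # daughter

Of course, estimating the "apparent" age and gender of a face is itself error-prone, which is part of why getting context right is hard.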

From a human perspective, context also matters in recognition. For most adult faces (those of close friends or relations), recognition was generally straightforward, but for children or less familiar people it was almost impossible to recognise them out of context. The particular software I was using did not allow me to switch easily between detail and context, so there are some faces that are, and will remain, unlabelled, meaning that I won't be able to find them again easily. For example, with context it was instantly apparent who one small child was: she was sat on her (recognisable) mother's knee, with her big sister at her side. But without that context, she is just a small (and slightly uncomfortable-looking) blonde toddler. Context matters.