Monday, 7 October 2013

Cultural heritage: sense making and meaning making


Last week, I was presenting at the workshop on Supporting Users' Exploration of Digital Libraries in Malta. One of the themes that came up was the relationship between meaning making and sense making. These seem to be two literatures that have developed in parallel without either referencing the other. Sense making is studied in the broad context of purposeful work (e.g. studying intelligence analysts working with information, photocopier engineers diagnosing problems, or lawyers working on a legal matter). Meaning making is discussed largely within museum studies, where the focus is on how to support visitors in constructing meaning during their visit. Within a cultural heritage context (which was an important focus for the workshop), there is a tendency to consider both, but it is difficult to clearly articulate their relationship.

Paula Goodale suggested that it might be concerned with how personally relevant the understanding is. This is intuitively appealing. For example, when I was putting together a small family tree recently, using records available on the internet, I came across the name Anna Jones about 4 generations back, and immediately realized that that name features in our family Bible. She's "Anna Davies" on the cover, but "Anna Jones" in the family tree inside. I had not known exactly how Anna and I are related, and the act of constructing the family tree made her more real (more meaningful) to me.

The same can clearly be true for family history resources within a cultural heritage context. But does it apply more broadly in museum curation work?
Following the workshop, we visited St Paul’s Catacombs in Rabat (Malta). 

The audio guide was pretty good for helping to understand the construction of the different kinds of tombs and the ceremonies surrounding death and the commemoration of ancestors. But was this meaning making? I’d say probably not, because it remained impersonal – it has no particular personal meaning for me or my family – and also because although I was attentive and walked around and looked at things as directed, I did not actively construct new meaning beyond what the curatorial team had invested in the design of the tour. Similarly, it wasn’t sense making because I had no personal agenda to address and didn’t actively construct new understanding for myself. So – according to my understanding – sense making and meaning making both require very active participation, beyond the engagement that may be designed or intended by educationalists or curators. They can design to enhance engagement and understanding, but maybe not to deeply influence sense making or meaning making. That is much more personal.

Monday, 16 September 2013

Affordance: the case of door closing

Last week, I was at (yet another) hotel. In the Ladies' (and presumably also the Gents'), the doors had door-plates on the inside, which facilitated pushing but not pulling. Within HCI, this is often referred to as the object affording a particular action. See, for example, work by Gaver and Hartson. In fact this example goes further than affording: it determines what is physically possible. In the case of doors, the assumption is that on one side you expect to pull and on the other you expect to push.

The problem was that in this case the door hinge was very simple: the door did not automatically close. So the only way to close the cubicle door was to pull on the small handle that was designed as a lock (that afforded turning but not pulling). The assumption behind having a plate on one side and a handle on the other is that there is a default position for the door, which could have been achieved if the "system" (aka the hinge) was set up to automatically close the door. But it didn't. In this case, the user has to both pull and push the door to get it to the desired positions -- and yes, privacy is valued by most of us in this situation, so most do want to be able to close the door as well as open it!

I've previously commented that we seem to be unable to design interactive devices as simple as taps; it seems that this extends even to doors... and I don't think interactions get much simpler than this.

Friday, 6 September 2013

The look of the thing matters

Today, I was at a meeting. One of the speakers suggested that the details of the way information is displayed in an information visualisation don't matter. I beg to differ.

The food at lunchtime was partly finger-food and partly fork-food. Inevitably, I was talking with someone whilst serving myself, but my attention was drawn to the buffet when a simple expectation was violated. The forks looked metallic and solid, so I expected them to be weighty. But the one I picked up was insubstantial and plastic: the metallic look and the form gave an appearance that didn't match reality.

I remember a similar feeling of being slightly cheated when I first received a circular letter (from a charity) where the address was printed directly onto the envelope using a handwriting-like font and with a "proper" stamp (queen's head and all that). Even though I didn't recognise the handwriting, I immediately expected a personal letter inside – maybe an invitation to a wedding or a party. But no: an invitation to make a donation to the charity. That's not exciting.

The visual appearance of such objects introduces a dissonance between expectation and fact, forcing us to shift from type 1 (fast, intuitive) thinking to type 2 (slow, deliberate) thinking. As the fork example shows, it's possible to create this kind of dissonance in the natural (non-digital) world. But it's much, much easier in the digital world to deliberately or accidentally create false expectations. I'm sure I'm not the only person to feel cheated when this happens.

Tuesday, 20 August 2013

Hidden in full view: the daft things you overlook when designing and conducting studies

Several years ago, when Anne Adams and I were studying how people engaged with health information, we came up with the notion of an "information journey", with three main stages: recognising an information need, gathering information, and interpreting that information. The important point (to us) in that work was highlighting the importance of interpretation: the dominant view of information seeking at that time was that if people could find information then that was job done. But we found that an important role for clinicians is in helping lay people to interpret clinical information in terms of what it means for that individual – hence our focus on interpretation.

In later studies of lawyers' information work, Simon Attfield and I realised that there were two important elements missing from the information journey as we'd formulated it: information validation and information use. When we looked back at the health data, we didn't see a lot of evidence of validation (it might have been there, but it was largely implicit, and rolled up with interpretation) but – now sensitised to it – we found lots of evidence of information use. Doh! Of course people use the information – e.g. in subsequent health management – but we simply hadn't noticed it because people didn't talk explicitly about it as "using" the information. Extend the model.

Wind forwards to today, and I'm writing a chapter for InteractionDesign.org on semi-structured qualitative studies. Don't hold your breath on this appearing: it's taking longer than I'd expected.

I've (partly) structured it according to the PRETAR framework for planning and conducting studies:
  • what's the Purpose of the study?
  • what Resources are available?
  • what Ethical considerations need to be taken into account?
  • what Techniques for data gathering?
  • how to Analyse data?
  • how to Report results?
...and, having been working with that framework for several years now, I have just realised that there's an important element missing, somewhere between resources and techniques for data gathering. What's missing is the step of taking the resources (which define what is possible) and using them to shape the detailed design of the study – e.g., in terms of interventions.

I've tended to lump the details of participant recruitment in with Resources (even though it's really part of the detailed study design), and of informed consent in with Ethics. But what about interventions such as giving people specific tasks to do for a think-aloud study? Or giving people a new device to use? Or planning the details of a semi-structured interview script? Just because a resource is available, that doesn't mean it's automatically going to be used in the study, and all those decisions – which of course get made in designing a study – precede data gathering. I don't think this means a total re-write of the chapter, but a certain amount of cutting and pasting is about to happen ...

Tuesday, 13 August 2013

Wizard of Oz: the medium and the message

Last week, one of my colleagues asserted that it didn't matter how a message was communicated – that the medium and the message were independent. I raised a quizzical eyebrow. A few days previously, I'd been in Vancouver, and had visited the Museum of Anthropology. It's a delightful place: some amazing art and artefacts from many different cultures. Most of them relate to ceremony and celebration, rather than everyday life, but they give a flavour of people's cultures, beliefs and practices. And most of them are beautiful.

One object that caught my attention was a yakantakw, or "speaking through post". According to the accompanying description: "A carved figure such as this one, with its prominent, open mouth, was used during winter ceremonies. A person who held the privilege of speaking on behalf of the hosts would conceal himself behind the figure, projecting his voice forward. It was as though the ancestor himself was calling to the assembled guests." This particular speaking through post dates from 1860, predating the Wizard of Oz by about 40 years.

In HCI, we talk about "Wizard of Oz experiments", in which participants are intended to believe that they are interacting with a computer system when in fact they are interacting with a human being who is hiding behind that system. It matters that people think that they are interacting with a computer rather than another human being. The analogy with the Wizard of Oz is quite obvious. But it looks like the native people in that region beat L. Frank Baum to the idea, and we should really be calling them "Yakantakw experiments". Just as soon as we Western people learn to pronounce that word.

Thursday, 18 July 2013

When reasoning and action don't match: Intentionality and safety

My team have been discussing the nature of “resilient” behavior, the basic idea being that people develop strategies for anticipating and avoiding possible errors, and creating conditions that enable them to recover seamlessly from disturbances. One of the examples that is used repeatedly is leaving one’s umbrella by the door as a reminder to take it when going out in case of rain. Of course, getting wet doesn’t seriously compromise safety for most people, but let’s let that pass: it’s unpleasant. This presupposes that people are able to recognize vulnerabilities and identify appropriate strategies to address them. Two recent incidents have made me rethink some of the presuppositions.

On Tuesday, I met up with a friend. She had left her wallet at work. It had been such a hot day that she had taken it out of her back pocket and put it somewhere safe (which was, of course, well hidden). She recognized that she was likely to forget it, and thought of ways to remind herself: leaving a note with her car keys, for instance. But she didn’t act on this intention. So she had done the learning and reflection, but it still didn’t work for her because she didn’t follow through with action.

My partner occasionally forgets to lock the retractable roof on our car. I have never made this mistake, but wasn’t sure why until I compared his behavior with mine. It turns out he is more relaxed than I am, and waits while the roof closes before taking the next step, which is often to close the windows, take the keys out of the lock and get out of the car. I, in contrast, am impatient. I can’t wait to lock the roof as it closes, so as the roof is coming over, my arm is going up ready to lock it. So I never forget (famous last words!): the action is automatised. The important point in relation to resilience is that I didn’t develop this behavior in order to keep the car safe or secure: I developed it because I assumed that the roof needed to be secured and I wanted it to happen as quickly as possible. So it is not intentional, in terms of safety, and yet it has the effect of making the system safer.

So what keeps the system safe(r) is not necessarily what people learn or reflect on, but what they act on. This is, of course, only one aspect of the problem; when major disturbances happen, it’s almost certainly more important to consider people’s competencies and knowledge (and how they acquired them). To (approximately) quote a London Underground controller: “We’re paid for what we know, not what we do”. Ultimately, it's what people do that matters in terms of safety; sometimes that can be clearly traced to what they know and sometimes it can't.


Saturday, 13 July 2013

Parallel information universes

A few years ago, a raised white spot developed on my nose. It's not pretty, so I'm not going to post a picture of it. I didn't worry about it for a while; then I tried internet searching to work out what it was and whether I should do anything about it.

A search for "raised white spot on skin" suggested that "seborrhoeic keratosis" was the most likely explanation. But I did an image search on that term and it was clearly wrong: wrong colour, wrong texture, wrong size...

"One should visit a doctor immediately when this signs arise": ignoring the grammatical problem in that advice, I booked an appointment with my doctor. She assured me that there is nothing to worry about -- that it is an "intradermal naevus", that there would be information about it on dermnetnz.org. Well, actually, no: information on Becker naevus (occurs mostly in men, has a dark pigment); on Sebaceous naevus (bright pink, like birth marks), Blue naevus (clue is in the colour)... and many other conditions that are all much more spectacular in appearance than a raised white spot. I find pages of information including words ending in "oma": melanoma, medulloblastoma, meningioma, carcinoma, lymphoma, fibroma. If the condition is serious, there is information out there about it. But the inconsequential? Not a lot, apparently. Contrary to my earlier belief, knowing the technical terms doesn't always unlock the desired information.

Look further. I find information on a patient site. But it's for healthcare professionals: "This is a form of melanocytic naevus [...] The melanocytes do not impart their pigmentation to the lesion because they are located deep within the dermis, rather than at the dermo-epidermal junction (as is the case for junctional naevi/compound naevi)." I feel stupid: I have a PhD, but it's not in medicine or dermatology, and I have little idea what this means.

I eventually work out that naevus (plural naevi) is another term for mole. I try searching for "white mole" and find general forums (as well as pictures of small furry creatures who dig). The forums describe something that sounds about right, but they lack clinical information on causes, treatment, or likely developments without treatment.

At that point, I give up. Lay people and clinicians apparently live in parallel universes when it comes to health information. All the challenges of interdisciplinary working that plague research projects also plague other interactions – at least when it comes to understanding white moles that are not cancerous and don't eat worms for breakfast.