Sunday, 4 March 2018

How not to design the user experience: update 2018

In November 2014, I wrote a summary of my experience of entering research data in Researchfish. Since then, aspects of this system have improved: at least some of the most obvious bugs have been ironed out, and being able to link data from ORCID makes one tedious aspect of the task (entering data about particular publications) significantly easier. So well done to the Researchfish team on fixing those problems. It's a pity it's still not fit for purpose, despite the number of funders who are supporting (mandating) use of this system.

The system is still designed without any consideration of how people conceptualise their research outputs – or at least, not how I do. According to Researchfish, it takes less than a lunch break to enter all the data. There are two problems with this:
1. Few academics that I know have the time to take a lunch break.
2. So far, today, it has taken me longer than that just to work out a strategy for completing this multi-dimensional task systematically. It's like 3-D Sudoku, but less fun.

Even for publications, it's a two-dimensional task: select publications (e.g., from ORCID) and select grants to which they apply. But if you just do this as stated, then you get many-to-many relationships, with every publication assigned both to grants it isn't associated with and to the one(s) it is. And yes, I have tested this. So you have to decide which grant you're going to focus on, then go through the list and add those... then go around the loop (add new publications > select ORCID > search > select publications > select grant) repeatedly for all grants. Maybe there's a faster way to do it, but I haven't discovered it yet. Oh: and if you make a mistake, there isn't an easy way to correct it, so there is probably over-reporting as well as under-reporting on many grants.
I'm still trying to guess what "author not available" means in the information about a publication. My strategy for working out which paper each line refers to has been to keep Google Scholar open in parallel and search for the titles there, because those make more sense to me.
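To make the linking problem above concrete: here is a minimal sketch (in Python, with made-up publication and grant names – nothing here comes from the actual Researchfish data model) of why bulk-linking a batch of publications while several grants are in scope over-reports, and why the per-grant loop is needed.

```python
# Hypothetical illustration: publications and grants as simple strings.
publications = ["paper_A", "paper_B", "paper_C"]
grants = ["grant_1", "grant_2"]

# Linking everything in one pass produces the cross product:
# every publication attached to every grant in scope,
# including grants it has nothing to do with.
bulk_links = {(p, g) for p in publications for g in grants}

# The relationship that actually holds is a sparse set of
# (publication, grant) pairs - hence one pass per grant.
true_links = {("paper_A", "grant_1"), ("paper_B", "grant_1"),
              ("paper_C", "grant_2")}

# Everything else is an over-reported link.
spurious = bulk_links - true_links
print(sorted(spurious))
```

Run as written, the spurious set contains the three links that should never have been made, which is exactly the over-reporting the interface makes hard to undo.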

In the section on reporting key findings of a grant, when you save the entry, it returns you to the same page. Why would you want to save multiple times, rather than just moving on to the next step? Why isn't there a 'next' option? And why, when you have said there is no update on a completed grant, does it still take you to the update page? What was the point of the question?

When you're within the context of one award, and you select publications, it shows all publications for all awards (until you explicitly select the option to focus on this award). Why? I'm in a particular task context...

When you're in the context of an award where you are simply a team member, you can filter by publications you've added, or by publications linked to this award, but not by publications that you've added that are also linked to this award. Those are the ones that I know about, and the ones that I want to check / update.

Having taken a coffee break, I returned to the interface to discover I had been logged out. I don't actually know my login details because the first time I logged in this morning I did so via ORCID. That option isn't available on the login page that appears after time-out. This is further evidence of poor system testing and non-existent user testing.

I could go on, but life is too short. There is no evidence of the developers having considered either conceptual design or task structures. There is no evidence that the system has actually been tested by real users who have real data entry tasks and time constraints. I really cannot comprehend how so many funders can mandate the use of a system that is so poorly designed, other than because they have the power to do so.
