Evaluating the recommender – undergraduate focus groups

We held more focus groups over the summer holidays, which is tricky when your testing audience is undergraduates!  Not many are left on campus during the summer months, but we did manage to find some still around the Manchester Metropolitan University (MMU) campus.  That might be a reflection of the heavy rainfall we’ve been having in Manchester this summer.

We chose MMU because we wanted to test the recommender on undergraduates who didn’t already have a recommender on their own library catalogue; students who did might have treated the tests as a comparison exercise.  We spoke to 11 undergraduates in total, and they tested the recommender 42 times between them.

The Results

The students were positive about the concept of the book recommender and were keen to use it as another tool in their armoury for discovering new resources.  A key bonus for them was how little input the recommender needed in order to give maximum output.  To a time-poor, pressured undergraduate, this is a huge plus point.

‘Yeah, I would use it, I don’t have to do anything’

‘I would always look at it if it was on the MMU library catalogue’.

The recommender also offered an alternative source of materials to the ubiquitous reading list.  This is absolutely crucial, because it quickly became apparent that our participants struggled to find resources:

‘I go off the reading list, all those books are checked out anyway’

‘I’ve used Google Scholar when I’ve had nowhere else to go, but it returned stuff I couldn’t get hold of and that just frustrated me’.

So in theory it offered more substance to their reading lists.  The additional books it found were more likely to be in the library, and came with the advantage that they were suggested on the basis of student borrowing patterns.  Our respondents liked having this insider knowledge about what their peers had read in previous years.

‘It would be useful as I had to read a book for a topic area and when we got the topic area there were already 25 reservations on the book, so if I could click on something and see what the person who did this last year read, that would be very useful’.

Testing the prototype

In testing it proved difficult to conclude whether the recommender was useful or not, as some testers seemed to have more luck than others in finding resources that were useful to them.  Obviously, some margin of error in the data collection method needs to be accounted for.

Of course, you could argue that whether a book is useful or not is a highly subjective decision.  One person’s wildcard might be another’s valuable and rare find, and whereas one tester might be searching for similar books, others might be looking for tangentially linked ones.  As an example, in our group the History students wanted old texts and the Law students wanted new ones.

Positively, 91.4% of the recommendations looked useful, and only 3 searches returned nothing of any use to the user.  88.6% of searches generated at least one item that the user wanted to borrow, and only 4 searches produced nothing the user would borrow.  Even allowing for some deviation due to subjectivity, these are compelling results.  Since the recommender asks the user to submit nothing substantial in order to get results, a low percentage of empty returns was acceptable to all the users we interviewed.

Privacy concerns?

As in previous research, none of the undergraduates attending the focus groups expressed any concern about privacy: they understood that the library collects circulation data and that the book recommender’s results are generated from that data.

‘I would benefit from everyone else’s borrowing as they are benefitting from mine, so I haven’t got a problem’.

‘It would be nice to be informed about it and given the option to opt out, but I don’t have a problem with it. No.’

Although a ‘nice to be asked’ sentiment was expressed by more than one attendee, they wouldn’t want this to delay the development of the book recommender.

In conclusion, the time-poor, pressured student struggling to find reading-list books still in the library would welcome another way of finding precious resources.  The majority of students in our groups would use the recommender and, although some recommendations are better than others, they would be willing to forgive this if it gave them just one more resource when the coursework deadline looms!

Working with Academics and the COPAC Recommender

Over the past month, after compiling a list of 132 potential candidates, I’ve been working with fourteen academics from representative disciplines within the Humanities around the UK to test and give feedback on the new Copac recommender.  Those individual interviews are now complete, and I am starting to put together a synthesis report on what they’ve told me.  It’s all quite heartening.

The first thing, as you will no doubt be pleased to hear, is that the recommender works!  For the majority of searches it returned legitimate and useful suggestions, which the academics said were definitely important and could be used to develop reading lists: “quite a few of these or so are just spot on” was a phrase a number of them used, as was “I know these books and authors; this is what I would hope would come up”.  Others found the recommender could also be applied to their own professional research: “I knew about this book, but I’d forgotten about it, so this is a good memory jogger, since that would be something I need to consult again”.  One academic found a book during testing that he hadn’t heard of before and, judging from the description and the contents, thought it likely he would consult it for his own current project.

In terms of the actual recommendations for a reading list, we heard comments such as “this is a seriously useful addition to Copac… it’s great for a reading list developer but this is a major advantage for students”, illustrating that the recommender could support undergraduate research as well as the development of lists for modules and courses.

To qualify, however, not all searches returned such useful content, and we knew they wouldn’t.  Some recommendations were considered tangential or too general, and that is partly down to the nature of the algorithm, which groups searches with other searches; not all suggestions are going to carry the same weight.  However, there is something to be said for serendipitous research in that sense: we often find that researchers themselves practise a non-linear approach, particularly near the beginning of a search on a library catalogue or database, allowing those accidents and peregrinations to guide rather than hinder.  To that end, one lecturer pointed out that “the editions and recommendations offered some interesting trails”, emphasising that the circuitous still has its place in the world of research.
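For readers curious about the mechanics: both posts describe the recommendations as being driven by circulation data, i.e. by works that tend to be borrowed together.  Here is a minimal sketch of that co-occurrence idea in Python.  The loan data and function names are invented for illustration; this is not Copac’s actual implementation, which may weight and filter suggestions quite differently.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical circulation data: each set holds the works one borrower
# has taken out. Real data would use record identifiers, not letters.
loans = [
    {"A", "B", "C"},
    {"A", "B"},
    {"B", "D"},
]

# Count how often each ordered pair of works is borrowed by the same person.
co_borrowed = defaultdict(int)
for basket in loans:
    for x, y in combinations(sorted(basket), 2):
        co_borrowed[(x, y)] += 1
        co_borrowed[(y, x)] += 1

def recommend(work, top_n=5):
    """Rank other works by how often they co-occur with `work`."""
    scores = {other: n for (w, other), n in co_borrowed.items() if w == work}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("A"))  # ['B', 'C'] -- B was co-borrowed twice, C once
```

A sketch like this also shows where the tangential suggestions come from: a title borrowed by everyone will co-occur with almost anything, which is why co-occurrence recommenders typically damp down very popular items.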

In other instances, some searches produced no results at all.  One explanation is that those searches were simply too obscure: the reality of a union catalogue like Copac is that the majority of its users will not have borrowed those specialised books alongside other books.  It could also still be connected to the glitch we discovered with the ISBNs, discussed in the post “Data Loading, Processing, and More Challenges with ISBNs – a Technical Update”: “In terms of the ISBN issue, we found our problem was not so much that we have duplicates appearing, but that when we implement it into Copac many results did not have recommendations at all – quite simply because we couldn’t easily match works with the same ISBN to one another.”  As with all testing, the discovery of glitches and bugs is advantageous, since these kinds of problems can only be uncovered by people actually using the tool.
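As an aside on why ISBN matching is fiddlier than it looks: the same work often appears under an ISBN-10 in one record and an ISBN-13 in another, so records only join up once every ISBN is normalised to a single canonical form.  The sketch below shows that standard normalisation step; the conversion algorithm itself is well established, but whether Copac’s fix works exactly this way is an assumption on my part.

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert an ISBN-10 to its ISBN-13 form so records for the same
    work can be matched on one canonical key."""
    digits = isbn10.replace("-", "").replace(" ", "")
    if len(digits) != 10:
        raise ValueError(f"not an ISBN-10: {isbn10!r}")
    core = "978" + digits[:9]  # drop the old check digit, add the prefix
    # ISBN-13 check digit: alternate weights 1 and 3, then mod 10.
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(core))
    check = (10 - total % 10) % 10
    return core + str(check)

print(isbn10_to_isbn13("0-306-40615-2"))  # 9780306406157
```

Even with both forms reduced to ISBN-13, matching is not trivial: different editions of the same work carry different ISBNs altogether, which is presumably part of why so many results initially had no recommendations.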

In terms of how the recommender functions for the end user (as opposed to the technical end), testers said that the system “works the way you’d expect it to work”, which is important to highlight: we wanted teachers and students to recognise it as a useful tool, be familiar with its basic functions from the outset, and not need any kind of training.  Another said that “from a personal perspective, it is a very good exploration tool” and that the “mixture of suggestions is very interesting but their usefulness would depend on the kind of course I’d be developing.”  That comment is also important, because the recommender is not meant to replace or surpass the expertise of the researcher, particularly with regard to reading lists.  It is a tool, pure and simple, and we understand that most academics will still have their own methods for choosing.  However, if the recommender can give them titles they had forgotten about, or wouldn’t otherwise have thought about in relation to their searches, then it is indeed valuable.  One lecturer also said that the recommender offered “a useful method for browsing”, which could help with developing research skills and information literacy.

All in all, the testing showed us that academics do appreciate the recommender and see it as valuable for putting together reading lists, as well as for aiding undergraduates in their own research.  The other valuable point I discovered in our favour was that none of them immediately saw the recommender as “just like Amazon’s”: to a person, they saw it as much more refined and suited to their work as teachers and researchers.