Evaluating the recommender

While our techies, Dave and Neeta, are working away to crunch the data we’ve pulled in from all of our partners, the other half of the Copac AD team are looking at the methods we’ll be using to evaluate the recommender.

We had some interesting user evaluations from the previous SALT project, where we spoke to postgraduates at The University of Manchester to find out if the recommender would support their research by surfacing valuable materials that might otherwise have been unknown to them. There was overwhelming support for the concept from the groups we spoke to, so we’re interested to find out how other users will respond to the recommender.

This time, we’re planning focus groups with undergraduate students to find out if the recommendations we generate will help them to find course materials.  We think that students, already familiar with the concept from the likes of Amazon and Spotify, will welcome the Copac AD recommender so we’ll be showing the groups our prototype and asking what they think.

We’ll also be interviewing academics and teachers to find out if recommendations could support the development of course reading lists. Some of the postgraduate researchers from the SALT focus groups were also graduate teaching assistants, and they told us that they could see how recommendations from the library catalogue could help their students to read more material and move beyond reading lists. We’d like to explore this more widely with academics from a range of institutions.

And finally, we’ll be conducting interviews with academic liaison librarians to ask if the recommender could support their work with collection development.  I was a liaison librarian in a previous life, and I can definitely see the potential of the tool – especially for gathering recommendations for stock purchases – but again we’ll be speaking to librarians from our project partner institutions to get their views.

We’ll be publishing our evaluation reports on this blog along with our testing instruments over the next couple of months.

What do the library users think?

As the SALT project and the Activity Data programme progress, I’m finding the results of the various user engagement exercises really interesting. As Janine’s already mentioned, we’re planning a structured user evaluation of our recommender tool with subject librarians and researchers, but before that we wanted to talk to some students to test some of our assumptions and understand library users’ experiences a little better.

So, last week I took myself off to the JRUL and interviewed four students (three postgraduates and one undergraduate).  In the main, I was (of course) interested in their opinions about recommenders, and whether they would find such a tool useful in the JRUL library catalogue and in Copac.  There is a lot of evidence to suggest that researchers would find the introduction of a recommender beneficial (not least from the other blogs on this programme), but what would the Manchester students and researchers think?  I was also interested in their research behaviour – did they always know exactly what they were looking for, or did they do subject and keyword searches?  And finally, I wanted to sound them out about privacy.

So what did they tell me?

On recommendations

There was varied use of recommenders through services like Amazon, but all of the students could see the potential of getting recommendations through the library catalogue, Copac, and eventually through the Library Search (powered by Primo). There were some concerns about distractions, with one student worried that she would spend her limited research time following a never-ending cycle of recommendations that took her further and further away from her original purpose. However, the biggest concern for all four was the possibility of irrelevant material being pushed to them – something that they would all find frustrating. A recommender could certainly help to widen reading choices, but all of them wanted to know how we were going to make sure that the suggestions were relevant. I noticed that the postgraduate participants in the Rise focus groups also needed to trust the information, and were interested to know where a recommendation had come from. It’s clear that trust is a big issue, and this is something we’ll definitely be re-visiting when we run the user evaluation workshops.

On research behaviour

On the whole, the participants knew what they were looking for when they opened the catalogue, and suggestions of material came from the usual suspects – supervisors, tutors, citations, or specific authors they needed to read. All of them felt that recommendations would be interesting and especially useful during extended research projects such as dissertations. However, what was most interesting to me was that, although they all said they would be interested to look at the suggestions, they all seemed unconvinced they would actually borrow the recommended books because, on the whole, they visited the catalogue to find specific items. So what does this mean for our hypothesis – that using circulation data can support research by surfacing underused library materials? These students didn’t have the opportunity to try the recommender, so you could argue that some scepticism is inevitable, and Huddersfield’s experience suggests that underused books will resurface. However, again we need to explore this further once we can show some students a working prototype.
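For readers curious about what sits underneath that hypothesis: the basic "people who borrowed X also borrowed Y" approach counts how often pairs of items appear in the same borrower's loan history. The sketch below is a minimal illustration of that idea only – the borrower and item identifiers are invented, and this is not the actual Copac AD implementation, which works over aggregated, anonymised data from the project partners.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical, anonymised loan records: (borrower_id, item_id) pairs.
loans = [
    ("u1", "hist_of_manchester"), ("u1", "victorian_cities"),
    ("u2", "hist_of_manchester"), ("u2", "victorian_cities"),
    ("u2", "industrial_revolution"),
    ("u3", "hist_of_manchester"), ("u3", "industrial_revolution"),
]

# Group the items each borrower has taken out.
items_by_borrower = defaultdict(set)
for borrower, item in loans:
    items_by_borrower[borrower].add(item)

# Count how often each pair of items shares a borrower.
co_counts = defaultdict(int)
for items in items_by_borrower.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, top_n=3):
    """Items most often borrowed alongside `item`, strongest first."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("victorian_cities"))
```

Even in this toy form, the relevance worry raised in the focus groups is visible: with only raw co-occurrence counts, popular items dominate every list, so a real deployment would need thresholds or normalisation to keep suggestions relevant.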

On privacy

I wasn’t sure whether privacy would be an issue, but none of the students I spoke to had any concerns about the library collecting circulation data and using it to power recommendations.  They considered this to be a good use of the data, as long as anonymity was taken into consideration.  On the whole, the students’ responses backed up the findings of the 2008 Developing Personalisation for the Information Environment Final Report, which found that “students place their trust in their college or university. They either think that they have no alternative to disclosing information to their institution, or believe that the institution will not misuse the information.”  They felt that, by introducing a recommender, the library was doing “a good thing” by trying to improve their search experience.  No concerns here.

Next Steps

Obviously, these were only the views of four students, and we need to do more work to test the usefulness of the tool. We’re now planning the user testing and evaluation of the recommender prototype, and recruiting postgraduate humanities researchers to take part. As Janine outlined, we’ll be introducing the tool to subject librarians at JRUL and humanities researchers to see if the recommendations are meaningful and useful.

I’m looking forward to finding out what they think, and we’ll let you know the results in a later blog post.