Working with Academics and the Copac Recommender

Over the past month, after compiling a list of 132 potential candidates, I’ve been working with fourteen academics in representative disciplines within the Humanities from around the UK to test and give feedback on the new Copac recommender.  Those individual interviews are now complete, and I am starting to put together a synthesis report on what they’ve told me; so far, it’s all quite heartening.

The first thing, as you will no doubt be pleased to hear, is that the recommender works!  For the majority of the searches, it returned legitimate and useful suggestions, which the academics said were genuinely important and could be used to develop reading lists: “quite a few of these or so are just spot on” was a phrase several of them used, as was “I know these books and authors; this is what I would hope would come up”.  Others found the recommender could also be applied to their own professional research: “I knew about this book, but I’d forgotten about it, so this is a good memory jogger, since that would be something I need to consult again”.  One academic even found, during testing, a book he hadn’t heard of before and, judging from the description and the contents, thought it likely that he would consult it for his own current project.

In terms of the actual recommendations for a reading list, we heard comments such as “this is a seriously useful addition to Copac… it’s great for a reading list developer but this is a major advantage for students”, illustrating that the recommender could support undergraduate research as well as the development of lists for modules and courses.

To qualify, however: not all searches returned such useful content, and we knew they wouldn’t.  Some recommendations were considered tangential or too general, partly because of the nature of the algorithm, which groups searches with other searches; not all suggestions are going to carry equal weight.  However, there is something to be said for serendipitous research in that sense: we often find that researchers themselves practise a non-linear approach, particularly near the beginning of a search on a library catalogue or database, allowing those accidents and peregrinations to guide rather than hinder.  To that end, one lecturer pointed out that “the editions and recommendations offered some interesting trails”, emphasising that the circuitous still has its place in the world of research.
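As a rough illustration of why a usage-based approach surfaces tangential suggestions alongside central ones, here is a minimal sketch of item-to-item co-occurrence scoring. This is an assumption about the general family of technique, not Copac’s actual algorithm, and all titles and data are hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical circulation data: each entry is the set of titles one
# borrower took out together. Real attribution data would be far larger.
sessions = [
    {"Paradise Lost", "Milton: A Biography", "The Faerie Queene"},
    {"Paradise Lost", "Milton: A Biography", "Areopagitica"},
    {"Paradise Lost", "The Faerie Queene"},
]

# Count how often each pair of titles appears in the same session.
cooccur = Counter()
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def recommend(title, top_n=3):
    """Rank other titles by how often they co-occur with `title`."""
    scores = {b: n for (a, b), n in cooccur.items() if a == title}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("Paradise Lost"))
```

Because every co-borrowing counts, a title borrowed just once alongside a popular work still appears in the ranking, which is exactly where the weaker, more tangential suggestions come from.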

In other instances, some searches produced no results at all.  It could simply be that the searches were too obscure: the reality of an integrated library system like Copac is that the majority of its users will not have taken out those same specialised books in relation to other books.  It could also be connected to the glitch we discovered with the ISBNs, which was discussed in the post “Data Loading, Processing, and More Challenges with ISBNs – a Technical Update”: “In terms of the ISBN issue, we found our problem was not so much that we have duplicates appearing, but that when we implement it into Copac many results did not have recommendations at all – quite simply because we couldn’t easily match works with the same ISBN to one another.”  As with all testing, the discovery of glitches and bugs is advantageous, since these kinds of problems can only be found by people actually using the tool.
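Part of why ISBN matching is harder than it sounds is that the same work can circulate under both an ISBN-10 and an ISBN-13, which will not match as plain strings. The standard fix is to normalise everything to ISBN-13. A minimal sketch of that conversion (a hypothetical helper, not Copac’s actual code):

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert an ISBN-10 to its ISBN-13 form ('978' prefix, new check digit)."""
    core = "978" + isbn10.replace("-", "")[:9]  # keep 9 digits, drop old check digit
    # ISBN-13 check digit: weights alternate 1, 3 across the first 12 digits.
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(core))
    check = (10 - total % 10) % 10
    return core + str(check)

# For example, the ISBN-10 "0306406152" normalises to "9780306406157",
# so records carrying either identifier can be matched to the same work.
print(isbn10_to_isbn13("0306406152"))
```

Even with normalisation in place, records that never carried an ISBN at all (common for older Humanities holdings) would still fall outside this kind of matching, which is consistent with some searches returning nothing.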

In terms of how the recommender functions for the end user (not the technical end), testers said that the system “works the way you’d expect it to work”, which is important to highlight: we wanted to ensure that teachers and students would recognise it as a useful tool and be familiar with its basic functions from the outset, without needing any kind of training.  Another said, “from a personal perspective, it is a very good exploration tool” and that the “mixture of suggestions is very interesting but their usefulness would depend on the kind of course I’d be developing.”  That comment is also important, because the recommender is not meant to replace or surpass the expertise of the researcher, particularly with regard to reading lists.  It is a tool, pure and simple, and we understand that most academics will still have their own methods for choosing.  However, if the recommender can give them titles they had possibly forgotten about, or would not otherwise have thought of in relation to their searches, then it is indeed valuable.  One lecturer also said that the recommender offered “a useful method for browsing”, which could help develop research skills and information literacy.

All in all, the testing showed us that academics do appreciate the recommender and see it as valuable for putting together reading lists, as well as for aiding undergraduates in their own research.  Another point in our favour was that none of them immediately saw the recommender as “just like Amazon’s”: to a person, they saw it as much more refined and suited to their work as teachers and researchers.