Researching the NHS Knowledge and Library Hub
About the user research on the use of the NHS Knowledge and Library Hub, conducted by Lagom Strategy.
The first anniversary of the launch of the NHS Knowledge and Library Hub in late 2021 was a good point for some focused research on how it is being used.
We are keen to ensure that the tweaks and developments we make are grounded in evidence where possible. We monitor the literature on discovery tools and scrutinise the Hub analytics, but both of these are fairly limited in scope, so we needed to build our own evidence base.
To this end we commissioned Lagom Strategy to carry out research to help us answer some of our outstanding questions. We wanted to gain a better picture of the ways users are interacting with the Hub and to build understanding that can underpin work across the network to engage users through training and promotion.
There were three main tools used to gather data:
- User experience survey (via a pop-up on the Hub)
- User interviews – in-depth qualitative work
- Usability testing – screen sharing with users carrying out tasks
There was a very deliberate focus on the views of users (big thanks to all those who helped us recruit participants) since, as information specialists, we often approach search tasks rather differently from our users. The pool of participants was a purposeful sample drawn from across the health professions.
The aim was to cover a range of user characteristics and so collect the widest possible range of viewpoints from what was a small group. Some participants ticked multiple boxes – for example an allied health professional, with a role in education, who is inexperienced with the Hub.
The user experience survey had the potential to generate more quantitative data alongside the in-depth qualitative tools but saw a fairly small return (~50 responses). We did see some library staff responses popping up in the survey, and these were identified and considered alongside the user data so that our preferences did not skew the picture.
Why do users come to the Hub and how do they get on?
As we might hope, there was evidence that users are regularly using the Hub to support decision making and build their knowledge. There was mention of staying up to date and of helping make the case in meetings. There was also support for using the Hub to find people working on a topic who could then be contacted directly to ask about it – it may be worth considering how we support that step of the process.
It was sadly unsurprising to find that awareness of the Hub was not high. This is an enduring challenge and to be expected with a newish addition to our portfolio.
An area of criticism was the lack of onboarding when users arrive on the Hub. Users felt they were not using the Hub as well as they might, expressing a desire for a quick walk-through and highlights of the key facilities. It is worth considering how we describe those key functions. My list would be: the wide range of material searched quickly; the inclusion of local materials such as the LMS / repository; and the slick linking and requesting.
There were favourable comments on the speed of search and the ease of linking to full text. In some cases users will head to the Hub with a known item to give themselves the best chance of locating a copy in our collections. An Occupational Therapist illustrated this nicely: “It’s as good as any search engine we use. It’s my go to one because I can get direct access to the article.”
There were also a good number of areas for improvement. Around half the time users found the search did not meet their expectations. We do not know how this compares to other search interfaces, but that success rate instinctively feels lower than we would like. Relevance was a concern, as was the quantity of materials returned.
The Hub interface was felt to be cluttered (something EBSCO are trying to address in their new user interface) and the right-hand navigation (search widgets) was particularly unpopular. Needing to log in was a bugbear. Encouraging users to log in as soon as they start their search is a message to drive home at every opportunity, to help reduce this area of dissatisfaction.
Usability
The usability testing saw users screen share as they carried out a series of tasks while the researcher observed and asked questions to clarify what they were doing. This produced rich data on where people got stuck, but also on the kinds of things they try and why.
We were aware of a potential issue around people entering their search terms in the wrong search box on library.nhs.uk (using “search this site” instead of the Hub). The research confirmed this as a source of confusion for users. This can be avoided by steering users through your local embedded version of the Hub or by directing them via your promotional URL.
A longer-term solution is under consideration, but particularly for new users it may be worth steering them to the right place in the meantime.
How do users search?
Among our participants a range of search practices was visible. Simple keyword searching was the main method, but there was awareness of the value of frameworks like PICO. Users were able to reapply the search techniques they had been shown elsewhere, but were honest enough to say this would not generally happen.
The filter model was well understood (“Like lastminute.com”, as one participant put it). Picking some strong keywords and then applying the filters for date or source is likely to be a helpful way to set users on their way searching the Hub.
We were curious whether people would save searches, but their focus was much more on how to share articles. Demonstrating how to find and share a permalink to an article record will therefore be appreciated.
What do users think they are searching?
The feedback here was that users were generally expecting a comprehensive search. The various databases listed in the widgets and at the bottom of the main search page do cause some confusion, with users unclear whether they are included or not. How we balance the desire for wide search against the potential for overwhelming results is something to consider carefully.
With a Google search the most reached-for comparison, we know people can cope with the very large numbers of results Google returns because they do not expect to look through many of them. This is another driver for working on relevance.
The creation of the BMJ Best Practice placard reflected a desire to bring some content more readily into view, and we hope to create a filter that brings more rapid-access evidence to the top of the list, reducing scrolling.
Comprehensive access to full text was a clearly expressed desire, with users wishing everything they clicked could be immediately available. This is an ambitious aspiration we may struggle to deliver. In addressing it we need to note that the current Hub full text filter has limitations: it does not include all of the thousands of hybrid open access journals that are identified via our Third Iron integration.
When searching the Hub, the “Get PDF” links are added in real time, drawing on the full knowledge base at Third Iron. We are working with EBSCO and Third Iron to seek ways to remedy this; if we can, the full text filter will become a real winner for users. The speed with which we can deliver on our collective “Request this” offer has the potential to drive increased usage and a positive user experience.
Access to help
The good news was that users are keen on the support we offer and recognise that we are subject matter experts in this area. The help on the Hub itself was not liked; clicking on the help link in the navigation brought dismay: “too much… too small”.
Since the research was carried out we have enhanced the links in the bottom branding of the main search page to make them more informative and we are talking to EBSCO about how help might work better on the new user interface.
Users reflected the diversity in how we all like to access support materials, with voices in favour of simple guides but also of videos that show you what to do.
Are users happy?
The research included a snapshot of satisfaction, asking users to score from 0 to 10 their likelihood of recommending the Hub. Users gave the Hub a net promoter score of 23 (calculated by subtracting the percentage of those scoring 0 to 6 from the percentage of those scoring 9 or 10), which is a good start but with some distance to go to the maximum score of 100.
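For anyone wanting to apply the same calculation to their own survey data, here is a minimal sketch of the arithmetic in Python. The score bands follow the standard net promoter convention described above; the sample scores are invented for illustration and are not the Lagom survey data.

```python
def net_promoter_score(scores):
    """Net promoter score from a list of 0-10 ratings.

    Promoters score 9 or 10, detractors score 0 to 6 (7 and 8 are
    "passives" and count only towards the total). The NPS is the
    percentage of promoters minus the percentage of detractors,
    giving a value between -100 and 100.
    """
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / total

# Invented example: 3 promoters, 2 detractors out of 10 responses
sample = [10, 10, 9, 8, 8, 7, 7, 7, 6, 5]
print(net_promoter_score(sample))  # 10.0
```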
We will be collecting fresh data on this in future. As previously mentioned, there was a mixed picture on how satisfied users were with the results of their searches. There is scope for further research to explore what they were expecting versus the reality, and how to close the gap.
What next?
Throughout this post you will hopefully have picked up areas that we have already been working on as a result of this research (and other engagement). The research has been discussed at the Knowledge and Library Hub Community of Practice, and all are welcome to join in with the ongoing consideration (as well as find the full text of the report from Lagom) by joining the CoP.