Resources and tools to enable knowledge and library services staff to understand and use AI.

Overview of the topic

Artificial Intelligence (AI) is an expansive topic. Broadly, it refers to software that can reflect limited human intelligence and perform tasks accordingly, often using vast quantities of data: for example, generating human-like text, or identifying and acting on patterns in images. There does not appear to be a universally agreed definition of Artificial Intelligence, and the term is sometimes used erroneously.

Increasingly, artificially intelligent tools are being used to assist with research, education, and informing service user and patient care. They can also assist with developing policy and business cases, and with enhancing communication with service users and patients. There are many different tools and applications.

As the population grows, there will be a greater need to use digital resources efficiently; AI tools may assist with burdensome tasks and save healthcare organisations money.

Below is a list of some AI types:

Imaging and video analytics

Partially automated image interpretation could speed up the detection and diagnosis of health concerns.

Predictive analytics

Processing vast quantities of information from real-time data monitoring, predictive analytical tools may spot disease before it is noticeable to clinicians.

These tools can assist with the prevention of disease and highlight potential risks in treatment options.

Robotic process automation

Software which can perform structured tasks using other software.

Bias and other risks

Academics, clinicians, and politicians have raised concerns about bias in AI tools: there is a risk of propagating and automating existing biases.

AI tools risk exacerbating bias against people already at increased risk of harm: those who have historically been marginalised or subject to discrimination. Bias could cause harm to service users, for example through underdiagnosing, or indeed overdiagnosing, vulnerable people.

Some GPT detectors, tools which check academic work for signs of generative AI use, have been found to be ineffective. This can result in false positives: students or academics being accused of using generative AI when they have not. One study indicated that some GPT detectors are biased against non-native English writers, putting them at increased risk of being unfairly accused of using generative AI to cheat.

Mitigating and reducing bias could involve questioning companies that offer AI products: ensuring that their development teams are suitably diverse and well treated, that they train their tools on diverse data, and that their products are thoroughly tested for quality.

Inadequate data collected by hospitals may not be compatible with some AI tools, leading to poor or inaccurate results. Good data integrity is vital for the safe use of AI tools.

Other risks have also been identified:

  • unintentional plagiarism and copyright infringement
  • environmental harm
  • data privacy concerns
  • security concerns
  • incorrect, misleading or inaccurate information
  • citation and reference inaccuracy
  • transparency and trustworthiness issues
  • legal issues

Large Language Models (LLMs) and healthcare

Well-known, accessible products such as ChatGPT, Bard, and Bing Chat are built on Large Language Models (LLMs), and there are numerous others. Language models excel at language-based tasks, such as assisting with the creation of simple Boolean search strategies, drafting strategy documents and targeted marketing copy, and synthesising or summarising information.

While this list isn’t exhaustive, healthcare professionals may use LLMs for:

  • information retrieval
  • self-learning and further education
  • patient conversation simulations
  • patient information writing and health literacy support
  • article draft and evidence summary generation
  • research question generation
  • marketing
  • patient support, engagement and consultation
  • clinical decision support and point of care assistance
  • assisting with diagnosis
  • generation of clinical reports
  • explaining drug-drug interactions
  • policy drafting
  • streamlining repetitive administrative tasks
  • drug lexicon/slang/synonym generation
  • guideline questions support

Hallucination and error

Hallucination is the presentation of information, usually by LLMs, which appears plausible but is erroneous; in other words, tools can make things up. LLM tools are generally designed to respond to whatever input they are given, and this sometimes produces incorrect responses.

Some LLMs cannot search the internet and so may be unable to answer questions accurately. It is important to differentiate between tools that can provide up-to-date answers and those that cannot: selecting the right tool for the right job is vital to the proper and effective use of AI tools.

While hallucination is currently being investigated and mitigated by tech companies, developers are encouraging users to be vigilant and double-check responses.

A good understanding of LLM tools, and of how to use them effectively, can cut down on hallucination. Even so, asking highly specific questions can produce erroneous responses, even from tools with internet search capabilities.

Practical examples from Knowledge and Library Services

From making data more discoverable, to rolling out LibKey Nomad, to using Natural Language Processing (NLP) products in search, KLS professionals are already using AI tools to enhance their services.

Some Knowledge Specialists are also using Large Language Models to assist with building search strategies.

Others are training and advising healthcare professionals and other KLS colleagues about how to use generative AI safely and effectively.

Certain tools, like Claude and Perplexity, can assist with summarising literature search results. Be careful not to include any personally identifying information, or paywalled content, in the documents you upload.

Prompting generative AI

Knowing how to prompt generative AI tools can help draw out richer and more useful responses. It is also important to differentiate between the various tools and their uses, making sure that we use the right tool for the right job.

Not all AI tools can search the internet and provide an accurate answer to a question. Most large language models are better suited to language-based tasks, such as generating synonyms for literature searching or summarising the information you give them.

Setting up a good prompt can take a little practice, time and patience. Everyone has their own unique prompting style, much as we have unique searching styles. The CLEAR framework (Concise, Logical, Explicit, Adaptive, Reflective) can help you draft new prompts.

Setting up dedicated threads for specific tasks can save you time. You can set up a prompt once and ‘dip into’ the thread whenever you need it, without having to rewrite the prompt every time.

Here are some examples:

Google search strategy generation

“Hello! I am a search expert. I would like you to help me generate search strategies for Google, using Boolean AND/OR terms only. If I give you the search queries, can you generate the search strategy, using UK English [and US English]? No need to explain the strategy itself, I am an expert and I already understand how the search works.”

Using this prompt in ChatGPT or Bard will allow you to simply paste in search queries in the future, and the tool will generate search strategies each time.

Because Google uses NLP algorithms, it will be more ‘forgiving’ of search strategies generated by LLMs.
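
As an illustration (a hypothetical query; the exact output will vary between tools and sessions), pasting in the query ‘remote consultations in general practice’ might return a strategy along these lines:

("remote consultation" OR "remote consultations" OR "virtual consultation" OR "video consultation" OR telehealth OR telemedicine) AND ("general practice" OR "primary care" OR "family practice" OR GP)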

Search block generation

“Hello! I am a Knowledge Specialist. I work in the UK NHS. My job is searching for evidence-based information. I would like to use this thread to generate search blocks for advanced search databases. Please do not use Medical Subject Headings (MeSH) or any other search operators, just 'OR' and 'AND'. When I paste in medical topics, please generate the search strategies using both UK and US spelling, and various relevant synonyms. No need to explain the strategy, as I am an expert and do not require this information.”

Use this prompt for search strategies for advanced search databases.
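
As an illustration (a hypothetical topic; the synonyms generated will vary), pasting in the topic ‘paediatric anaesthesia’ might return a block such as:

(paediatric OR pediatric OR child OR children OR infant OR infants OR adolescent OR adolescents) AND (anaesthesia OR anesthesia OR anaesthetic OR anesthetic OR anaesthetics OR anesthetics)

Note that the block includes both UK and US spellings, as the prompt requested; always review and adapt generated blocks before running them.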

Summarising

Some tools, like Claude, will allow you to upload documents to summarise. Uploading anonymised literature search results, or RefWorks bibliographies, will make things easier. Always prompt the tools to use UK English in their responses.

Don’t ask the tool simply to ‘summarise’ the information. The response will lack detail and may miss the important information you have found! Instead, ask it questions about your search. Here’s an example:

“Drawing purely from the information in this document, please list the benefits of having a Knowledge Management Service in healthcare organisations, using bullet points and UK English.”

Asking relevant questions will draw out richer responses and a more detailed summary. Remember to reference all the materials in your summary, and double-check the information before sending it to your users. It’s your responsibility to use these tools appropriately!
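
Because the thread retains your document, you can keep questioning it. A hypothetical follow-up prompt might be:

“Using only this document, which of the included studies report cost savings? Please list each one with its author and year, in UK English.”

Each targeted question like this surfaces detail that a single ‘summarise’ request would miss.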

For more detailed information about using AI tools to summarise information, check out this blog.

Networks, courses and online learning

AI for Healthcare: Equipping the Workforce for Digital Transformation

E-learning on how artificial intelligence is transforming healthcare and how it can be used to support change in the healthcare workforce.

PGCert Clinical Data Science

This Clinical Data Science course from the University of Manchester was co-created with end users and industry partners to develop a flexible programme suitable for busy health and social care practitioners.

Federation for Informatics Professionals (FEDIP)

Members can get access to an information Hub, as well as other opportunities to network.

Current and Emerging Technology in Knowledge and Library Services

A community of practice for Knowledge and Library workers who are interested in emerging technologies and their practical uses. There is also a bank of resources for using AI and AI training materials for Knowledge and Library Service users.

Artificial Intelligence in Teaching and Learning

This subject guide will help students and staff explore a range of AI technologies and consider how these technologies might affect their teaching and learning practice.

Page last reviewed: 10 January 2024