Short and sweet: summarising literature searches with generative AI
Using AI to produce summaries for literature searches.
Since generative AI tools hit the headlines, people have been learning how best to use them, from healthcare workers to Knowledge and Library staff.
Getting these tools to work for you can be a bit tricky, and with the plethora of different tools currently available, it can be hard to know which ones are fit for the job.
Through my own exploration, I’ve found that certain AI tools are useful for summarising and synthesising information for literature searches. Large Language Models (LLMs) are usually great at language-based tasks.
It’s not simply a case of asking tools to generate summaries of their own accord: they might not include the information you found during the search, and they might make things up. Certain tools, like Humata, Perplexity, and Claude, allow you to upload documents to summarise. I find it useful to upload RefWorks bibliographies and prompt from there.
Other tools, like GPT-3.5, GPT-4, and Bard, are great at summarising or synthesising paragraphs of information.
In the future, Microsoft’s Copilot in Word might revolutionise this process, so watch this space!
Cautions
These tools aren’t perfect, though; like any tool, they come with certain factors to consider when using them.
Privacy and data protection
Some tools require you to upload a document to summarise the information within. Make sure that no personal, identifiable information remains in the document before uploading, and read through the product’s data protection and privacy statements.
Copyright concerns
Avoid uploading or pasting material that is not publicly available, and always reference the source material appropriately; these tools rarely reference the source, if at all, and even when they do, there’s no guarantee that the reference will be accurate.
‘Hallucination’ and misleading content generation
Always read through the generated summary carefully, and make changes as needed. Hallucination can be minimised by prompting tools to draw only from the information that you provide, and not to add extra detail beyond the material you give them.
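For example (and the exact wording here is just a suggestion), a grounding instruction added to the end of a prompt might look like this:
“Using only the material I have provided, summarise the key findings. Do not add any information that does not appear in the text.”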
Bias
Both you and the tool may be selectively presenting ‘positive’ information. For example, the tool may generate a summary indicating that treatment option X provides statistically significant and positive outcomes, but fail to mention anything about side-effects. Reading through the information in your search can help fill in any gaps left by the tool; alternatively, specify in your prompt that the tool should generate a ‘balanced’ summary.
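A hypothetical ‘balancing’ instruction could be as simple as:
“Summarise both the benefits and the drawbacks reported in this material, including any side-effects or negative findings.”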
Managing expectations
Summaries and syntheses don’t always include the full body of evidence encountered in a literature search. It might be useful to remind the requester to read through the information carefully for themselves.
Spelling
Most tools default to American English. Prompting them to use UK English will save a bit of time.
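In my experience, something as short as “Please write your answer in UK English” tacked onto the end of the prompt does the trick.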
Even when taking these things into account, generating summaries can still save a whole lot of time compared with writing something out in full by hand.
Brief summaries
Brief summaries, which simply outline the results and sum up a few recurring themes in a literature search, can be useful for presenting key papers in an accessible way. Of course, with a gentle note to the requester to browse through all the material in the document.
You can upload search bibliographies into tools like Humata, Perplexity, and Claude and ask them simply to summarise the material, but that might not make for a great summary. I’ve found that asking the tool to draw out relevant information, using the original search request as a guide, works much better.
For example, let’s pretend that I was asked to conduct a search about the cost-effectiveness of various types of food for chickens. Asking a generative AI tool to summarise the bibliography would inevitably lead it to generate a summary stating the obvious: that it’s a bibliography of information about chicken food. Not ideal!
Instead, I could ask it questions, getting it to draw solely from the information I provide it. What is the most cost-effective food for chickens? What food is the most nutritious? What food is a favourite among our fluffy, evolved dinosaurs?
By asking questions relevant to the search, rather than asking for a summary, we can create more robust content.
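Putting this together, a hypothetical prompt for the chicken example might read:
“Using only the bibliography I have uploaded, what is the most cost-effective food for chickens? Please don’t include any information that isn’t in the document.”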
More in-depth summaries
Generative AI tools can also be useful for more in-depth summaries of the information found in literature searches, comparing results from different papers and providing a more robust overview of the results of the search.
For more detailed summaries, I find using GPT-3.5 or GPT-4 to be useful. During a search, I keep a note of key recurring themes.
For example, if I ran a literature search around factors for retaining Knowledge and Library staff, it’s likely that there will be certain themes, such as positive work environments, CPD opportunities, and free cake.
These themes would make great subheadings. I could paste relevant lines from abstracts beneath these subheadings. The result would be Frankenstein’s monster paragraphs that probably make little sense! These paragraphs can then be pasted into GPT for it to reword into something coherent.
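The rewording prompt itself needn’t be fancy; something along these lines makes a reasonable starting point:
“Please reword the following paragraphs into clear, flowing prose. Keep the subheadings, write in UK English, and use only the information provided.”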
All I need to do then is read through the paragraphs and reference the material accordingly.
To summarise
And yes, I am writing this bit by hand! So, in a nutshell, using generative AI tools to generate summaries can save time and add quality to literature searches. When we select and use these tools effectively, they can be pretty useful.