How are healthcare professionals using generative AI?

Generative tools like ChatGPT, Claude and Bard have been around for almost a year now, and the dust is settling. People are learning how best to utilise these tools to make working life a bit easier.

Knowledge and Library Specialists (KLS) are seeing how they can be used to generate search strategies and marketing materials, to lend a helping hand in synthesising information, and to develop prompts that limit hallucination, or misleading AI responses.

Learning how healthcare professionals are using new technologies can help inform and shape my own practice, so I’ve been looking into how folks are using and testing different generative AI tools and LLMs (large language models).

After numerous conversations with KLS colleagues and healthcare professionals (and a couple of evidence summaries!), I came up with a list of current uses and trends. While I won’t list everything I’ve stumbled across over the past couple of months as we’d be here all week, I’ll provide an outline.

People are using generative AI in many different ways, and it’s fascinating to read about! Of particular note, there are numerous studies comparing GPT responses to healthcare student examination questions; I won’t be covering these here, but a notable number of papers use similar exercises to test the accuracy of the information presented by GPT and other tools.

Generally speaking, GPT in particular seemed to score reasonably well, with some variation across different specialties.

CPD and education

Generating essays, or paragraphs in essays, was a noted concern among some healthcare educators. However, some have argued that students should instead be encouraged to develop their critical thinking and analysis skills, so they can spot bias and hallucination in generative AI responses rather than take them at face value.

It has also been noted that in the near future, more tailored and personalised educational resources could become available, providing students with knowledge that suits their contextual needs at any given moment.

Simulating patient conversations and consultations may also be a promising future use of this technology, judging from recent tests.

I simulated a few Knowledge Management tools myself by getting GPT to facilitate one-to-one Before Action Reviews and After Action Reviews; afterwards, I was able to generate a report of each review to reflect on.
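For anyone curious what that looks like in practice, here is a minimal sketch of the kind of workflow involved. It uses the OpenAI Python client; the system prompt, model name, and ‘REVIEW COMPLETE’ marker are my own illustrative choices rather than anything prescribed, so treat it as a starting point rather than a finished tool.

```python
# A minimal sketch: using an LLM to facilitate a one-to-one After Action
# Review (AAR), then produce a report to reflect on. Assumes the OpenAI
# Python client and an API key in the OPENAI_API_KEY environment variable;
# the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are facilitating a one-to-one After Action Review. Ask the four "
    "classic AAR questions one at a time: what was expected to happen, "
    "what actually happened, why there was a difference, and what can be "
    "learned. Base your follow-ups only on my answers; do not invent "
    "details. When all four are answered, say 'REVIEW COMPLETE'."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    reply = client.chat.completions.create(
        model="gpt-4", messages=messages
    ).choices[0].message.content
    print(f"\nFacilitator: {reply}")
    messages.append({"role": "assistant", "content": reply})
    if "REVIEW COMPLETE" in reply:
        break
    messages.append({"role": "user", "content": input("\nYou: ")})

# Finally, ask for a short written report of the review to keep and reflect on.
messages.append({
    "role": "user",
    "content": "Summarise this review as a short report, including key "
               "lessons learned and any actions discussed.",
})
report = client.chat.completions.create(
    model="gpt-4", messages=messages
).choices[0].message.content
print(f"\n{report}")
```

The same pattern would work for a Before Action Review by swapping in the corresponding questions.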

Research assistance

Some researchers use generative AI to assemble a first draft of paragraphs, or even whole papers, intended for publication. However, many publishers and authorship guidelines do not allow authors to credit AI tools with authorship, due to concerns around research transparency and author accountability.

Some researchers have also tentatively noted that generative AI could assist with synthesising and summarising complex topics for various audiences, as well as with partially automating systematic reviews (a concept which has been around for many years).

Generative AI can also assist with generating search strategies, research ideas and questions, and, to some extent, with analysing data.
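To give a flavour of the search strategy use case, the sketch below asks a model for a draft Boolean strategy structured around PICO. The example question and prompt wording are my own, and anything a model returns would still need checking and refinement by a searcher before being run against a real database.

```python
# A rough sketch of prompting for a draft search strategy; the example
# question and prompt wording are illustrative, and the output would need
# checking by a human searcher before use.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft a MEDLINE search strategy for the question: 'Does exercise "
    "reduce falls in older adults living in care homes?'. Structure the "
    "concepts using PICO, suggest MeSH terms and free-text synonyms for "
    "each concept, and combine the search lines with Boolean operators "
    "(OR within concepts, AND between them)."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```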

Clinical decision making

Some tools have been developed or tested to assist with tasks around diagnostic support, although early studies indicate that these tools require much further refinement!

Generally, these tools seem reasonably accurate, again with variation depending on the topic, but they fall short of human clinical reasoning. A notable theme across different papers (and conversations), however, is that these tools are expected to become more accurate over time.

That said, the studies I’ve come across note that human judgment, validation, and responsible oversight are crucial to realising this potential and avoiding risks.

Communicating with service users and patients

Clinicians have been testing how useful generative AI is for reporting purposes, from providing personalised reports to clinicians to producing tailored reports for patients and service users. And as touched on previously, some students have been using generative AI to simulate conversations with patients, sharpening their communication skills through practice.

Language barriers between healthcare professionals and service users may also be somewhat reduced, as generative AI can translate written English material into other languages, although some tools may miss certain linguistic subtleties.

Generative AI may also hold some promise in the telehealth arena, potentially assisting with patient communication around smoking cessation or improvements to diet and exercise.

Other uses and challenges

The usual ‘streamlining’ of administrative processes has been touched upon in some papers, as has the drafting of healthcare policies.

There are accuracy, accountability, transparency and ethical limitations that require continued research and caution before GPT, Bing and other more specialised tools can be responsibly used more widely in healthcare settings.

Ethical challenges in particular are present in many conversations and papers; from the risk of perpetuating bias and exacerbating current inequalities, to the promotion of misinformation, to over-reliance on tools we may not fully understand, to questions around patient and service user privacy.

Generative AI has its uses, as long as it is used in an informed and evidence-based way.

The present and future

It’s clear that healthcare professionals and researchers are using generative AI and testing its capabilities. And it’s likely that in the future, we’ll see more specialised tools tailored to healthcare professionals, and to knowledge and library professionals.

I think it’s fair to say that the marketing hype around these tools is quietening, allowing for constructive conversations about how generative AI can be used productively and safely, and how these tools can be developed further.

Many questions remain for the future, such as what patients and service users think and feel about AI, and whether they use tools like ChatGPT and Bing to answer health-related queries. There are also questions around the changing search and information-seeking behaviours of KLS staff and healthcare professionals, and the ever-increasing importance of AI literacy.

HW

Ms Hannah Wood

She/Her

Knowledge Specialist

Health Education England