Advancing radiology practice and research: harnessing the potential of large language models amidst imperfections

Eyal Klang, Lee Alper, Vera Sorin, Yiftach Barash, Girish N. Nadkarni, Eyal Zimlichman

Research output: Contribution to journal › Article › peer-review

Abstract

Large language models (LLMs) are transforming the field of natural language processing (NLP). These models offer opportunities for radiologists to make a meaningful impact in their field. NLP is a branch of artificial intelligence (AI) that uses computer algorithms to analyse and understand text data. Recent advances in NLP include the attention mechanism and the Transformer architecture. Transformer-based LLMs, such as GPT-4 and Gemini, are trained on massive amounts of data and generate human-like text. They are well suited to analysing large volumes of text in academic research and clinical practice in radiology. Despite their promise, LLMs have limitations, including their dependence on the diversity and quality of their training data and the potential for false outputs. Notwithstanding these limitations, the use of LLMs in radiology holds promise and is gaining momentum. By embracing the potential of LLMs, radiologists can gain valuable insights and improve the efficiency of their work, ultimately leading to improved patient care.
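To illustrate the attention mechanism referred to in the abstract, the sketch below implements scaled dot-product attention, the core operation of Transformer-based LLMs, in plain NumPy. This is a minimal, self-contained illustration rather than the method of any specific model; the function name, toy dimensions, and random token embeddings are illustrative assumptions, not drawn from the article.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax -> attention weights
    return weights @ V                               # weighted sum of value vectors

# Toy example: 3 token embeddings of dimension 4 attend to one another (self-attention).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (3, 4): one contextualised vector per token

In a full Transformer, this operation is repeated across multiple heads and layers over learned projections of the token embeddings, which is what allows LLMs such as those discussed in the article to model long-range context in radiology text.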

Original language: English
Article number: tzae022
Journal: BJR Open
Volume: 6
Issue number: 1
DOIs
State: Published - 1 Jan 2024

Keywords

  • artificial intelligence
  • ChatGPT
  • large language models
  • natural language processing
