TY - JOUR
T1 - Evaluating large language models on medical evidence summarization
AU - Tang, Liyan
AU - Sun, Zhaoyi
AU - Idnay, Betina
AU - Nestor, Jordan G.
AU - Soroush, Ali
AU - Elias, Pierre A.
AU - Xu, Ziyang
AU - Ding, Ying
AU - Durrett, Greg
AU - Rousseau, Justin F.
AU - Weng, Chunhua
AU - Peng, Yifan
N1 - Publisher Copyright:
© 2023, Springer Nature Limited.
PY - 2023/12
Y1 - 2023/12
N2 - Recent advances in large language models (LLMs) have demonstrated remarkable successes in zero- and few-shot performance on various downstream tasks, paving the way for applications in high-stakes domains. In this study, we systematically examine the capabilities and limitations of LLMs, specifically GPT-3.5 and ChatGPT, in performing zero-shot medical evidence summarization across six clinical domains. We conduct both automatic and human evaluations, covering several dimensions of summary quality. Our study demonstrates that automatic metrics often do not strongly correlate with the quality of summaries. Furthermore, informed by our human evaluations, we define a terminology of error types for medical evidence summarization. Our findings reveal that LLMs could be susceptible to generating factually inconsistent summaries and making overly convincing or uncertain statements, leading to potential harm due to misinformation. Moreover, we find that models struggle to identify the salient information and are more error-prone when summarizing over longer textual contexts.
UR - https://www.scopus.com/pages/publications/85168711933
DO - 10.1038/s41746-023-00896-7
M3 - Article
AN - SCOPUS:85168711933
SN - 2398-6352
VL - 6
JO - npj Digital Medicine
JF - npj Digital Medicine
IS - 1
M1 - 158
ER -