TY - JOUR
T1 - AI in the ED
T2 - Assessing the efficacy of GPT models vs. physicians in medical score calculation
AU - Haim, Gal Ben
AU - Braun, Adi
AU - Eden, Haggai
AU - Burshtein, Livnat
AU - Barash, Yiftach
AU - Irony, Avinoah
AU - Klang, Eyal
N1 - Publisher Copyright:
© 2024 Elsevier Inc.
PY - 2024/5
Y1 - 2024/5
N2 - Background and aims: Artificial intelligence (AI) models such as GPT-3.5 and GPT-4 have shown promise across various domains but remain underexplored in healthcare. Emergency departments (EDs) rely on established scoring systems, such as the NIHSS and the HEART score, to guide clinical decision-making. This study aimed to evaluate the proficiency of GPT-3.5 and GPT-4 against experienced ED physicians in calculating five commonly used medical scores. Methods: This retrospective study analyzed data from 150 patients who visited the ED over one week. Both AI models and two human physicians were tasked with calculating the NIH Stroke Scale, Canadian Syncope Risk Score, Alvarado Score for Acute Appendicitis, Canadian CT Head Rule, and HEART Score. Cohen's Kappa statistic and ROC-AUC values were used to assess inter-rater agreement and predictive performance, respectively. Results: The highest level of agreement was observed between the two human physicians (Kappa = 0.681), while GPT-4 showed moderate to substantial agreement with them (Kappa = 0.473 and 0.576). GPT-3.5 had the lowest agreement with the human scorers. Human physicians achieved a higher ROC-AUC on 3 of the 5 scores, although none of the differences were statistically significant; overall, these results favor human expertise over the currently available automated systems for this task. Conclusions: While the AI models demonstrated some concordance with human expertise, they fell short of emulating the complex clinical judgments that physicians make. The study suggests that current AI models may serve as supplementary tools but are not ready to replace human expertise in high-stakes settings such as the ED. Further research is needed to explore the capabilities and limitations of AI in emergency medicine.
UR - https://www.scopus.com/pages/publications/85187010997
U2 - 10.1016/j.ajem.2024.02.016
DO - 10.1016/j.ajem.2024.02.016
M3 - Article
C2 - 38447503
AN - SCOPUS:85187010997
SN - 0735-6757
VL - 79
SP - 161
EP - 166
JO - American Journal of Emergency Medicine
JF - American Journal of Emergency Medicine
ER -