LLM-Based Chatbot to Reduce Mental Illness Stigma in Healthcare Providers

C. Mahony Reategui-Rivera, Aref Smiley, Joseph Finkelstein

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Mental illness stigma in healthcare providers negatively impacts patient care and well-being, yet existing interventions to mitigate this stigma are often resource-intensive and difficult to scale. This paper presents the design and evaluation of a conversational agent (CA) named Stigma Educational Bot 1.0, powered by a large language model (LLM), specifically GPT-4. The CA aims to reduce stigma among healthcare providers by delivering an educational program grounded in anti-stigma principles and behavior change theories across four modules. The CA was developed using the GPT Editor interface, incorporating tailored instructions and a curated knowledge base from evidence-based resources, including materials from the Pan American Health Organization. Its performance was evaluated through four key tasks: direct question answering, generation of case scenarios, identification of stigma in simulated clinical interactions, and generation of empathetic testimonies. Expert evaluators assessed the CA's outputs using a 5-point Likert scale and performance metrics such as precision and recall. Results indicate that the CA excels in delivering structured educational content on topics like 'Conditions' (scores of 5.0) but shows limitations in foundational concepts like 'Definition' (scores as low as 2.5). It demonstrated high consistency in language and style when generating case scenarios and testimonies, with perceived empathy scores up to 4.5. However, the CA exhibited moderate performance in identifying stigmatizing behaviors, with F1 scores ranging from 0.48 to 0.54 and lower recall rates, particularly for overt manifestations of stigma. The study highlights the potential of GPT-based conversational agents as scalable tools for stigma reduction among healthcare providers by offering accessible and interactive educational interventions. Limitations include reliance on simulated tasks, specific training materials, and moderate performance in stigma detection. Future work should focus on enhancing foundational knowledge delivery, improving the identification of overt stigmatizing behaviors, and assessing real-world impact.
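For readers unfamiliar with the reported metrics, the minimal sketch below (not taken from the paper; the labels and predictions are hypothetical) illustrates how precision, recall, and the F1 score would be computed for the stigma-identification task, where each simulated clinical interaction is annotated by experts as stigmatizing or not and compared against the chatbot's judgment.

```python
# Minimal sketch (not from the paper): precision, recall, and F1 for a
# binary stigma-identification task, using hypothetical labels.

from typing import Sequence


def precision_recall_f1(
    y_true: Sequence[int], y_pred: Sequence[int]
) -> tuple[float, float, float]:
    """Return (precision, recall, F1) for binary labels, where 1 = stigmatizing."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    # Hypothetical expert annotations (1 = stigmatizing behavior present) and
    # chatbot predictions for ten simulated clinical interactions.
    expert = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    chatbot = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]

    p, r, f1 = precision_recall_f1(expert, chatbot)
    print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

In this toy example, lower recall than precision mirrors the pattern reported in the abstract, where the agent more often missed stigmatizing behaviors (false negatives) than flagged neutral ones.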

Original language: English
Title of host publication: 2025 IEEE 15th Annual Computing and Communication Workshop and Conference, CCWC 2025
Editors: Rajashree Paul, Arpita Kundu
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-7
Number of pages: 7
ISBN (Electronic): 9798331507695
DOIs
State: Published - 2025
Externally published: Yes
Event: 15th IEEE Annual Computing and Communication Workshop and Conference, CCWC 2025 - Las Vegas, United States
Duration: 6 Jan 2025 – 8 Jan 2025

Publication series

Name: 2025 IEEE 15th Annual Computing and Communication Workshop and Conference, CCWC 2025

Conference

Conference: 15th IEEE Annual Computing and Communication Workshop and Conference, CCWC 2025
Country/Territory: United States
City: Las Vegas
Period: 6/01/25 – 8/01/25

Keywords

  • Conversational agent
  • educational interventions
  • GPT-4
  • large language models
  • mental illness stigma
