TY - GEN
T1 - LLM-Based Chatbot to Reduce Mental Illness Stigma in Healthcare Providers
AU - Reategui-Rivera, C. Mahony
AU - Smiley, Aref
AU - Finkelstein, Joseph
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Mental illness stigma in healthcare providers negatively impacts patient care and well-being, yet existing interventions to mitigate this stigma are often resource-intensive and difficult to scale. This paper presents the design and evaluation of a conversational agent (CA) named Stigma Educational Bot 1.0, powered by a large language model (LLM), specifically GPT-4. The CA aims to reduce stigma among healthcare providers by delivering an educational program grounded in anti-stigma principles and behavior change theories across four modules. The CA was developed using the GPT Editor interface, incorporating tailored instructions and a curated knowledge base from evidence-based resources, including materials from the Pan American Health Organization. Its performance was evaluated through four key tasks: direct question answering, generation of case scenarios, identification of stigma in simulated clinical interactions, and generation of empathetic testimonies. Expert evaluators assessed the CA's outputs using a 5-point Likert scale and performance metrics such as precision and recall. Results indicate that the CA excels in delivering structured educational content on topics like 'Conditions' (scores of 5.0) but shows limitations in foundational concepts like 'Definition' (scores as low as 2.5). It demonstrated high consistency in language and style when generating case scenarios and testimonies, with perceived empathy scores up to 4.5. However, the CA exhibited moderate performance in identifying stigmatizing behaviors, with F1 scores ranging from 0.48 to 0.54 and lower recall rates, particularly for overt manifestations of stigma. The study highlights the potential of GPT-based conversational agents as scalable tools for stigma reduction among healthcare providers by offering accessible and interactive educational interventions. Limitations include reliance on simulated tasks, specific training materials, and moderate performance in stigma detection. Future work should focus on enhancing foundational knowledge delivery, improving the identification of overt stigmatizing behaviors, and assessing real-world impact.
KW - Conversational agent
KW - educational interventions
KW - GPT-4
KW - large language models
KW - mental illness stigma
UR - http://www.scopus.com/inward/record.url?scp=105001145657&partnerID=8YFLogxK
U2 - 10.1109/CCWC62904.2025.10903778
DO - 10.1109/CCWC62904.2025.10903778
M3 - Conference contribution
AN - SCOPUS:105001145657
T3 - 2025 IEEE 15th Annual Computing and Communication Workshop and Conference, CCWC 2025
SP - 1
EP - 7
BT - 2025 IEEE 15th Annual Computing and Communication Workshop and Conference, CCWC 2025
A2 - Paul, Rajashree
A2 - Kundu, Arpita
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 15th IEEE Annual Computing and Communication Workshop and Conference, CCWC 2025
Y2 - 6 January 2025 through 8 January 2025
ER -