TY - GEN
T1 - Comparison of ACM and CLAMP for Entity Extraction in Clinical Notes
AU - Shah-Mohammadi, Fatemeh
AU - Cui, Wanting
AU - Finkelstein, Joseph
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - The rapid increase in the adoption of electronic health records in health care institutions has motivated the use of entity extraction tools to extract meaningful information from clinical notes written in an unstructured, narrative style. This paper investigates the performance of two such tools in automatic entity extraction. Specifically, this work focuses on the automatic medication extraction performance of Amazon Comprehend Medical (ACM) and the Clinical Language Annotation, Modeling and Processing (CLAMP) toolkit using the 2014 i2b2 NLP challenge dataset and its annotated medical entities. Recall, precision and F-score are used to evaluate the performance of the tools. Clinical Relevance - The majority of data in electronic health records (EHRs) are in the form of free text, which constitutes a gold mine of patient information, while computerized applications in healthcare institutions as well as clinical research leverage structured data. As a result, information hidden in clinical free text needs to be extracted and formatted as structured data. This paper evaluates the performance of ACM and CLAMP in automatic entity extraction. The evaluation results show that CLAMP achieves an F-score of 91%, compared to an 87% F-score for ACM.
AB - The rapid increase in the adoption of electronic health records in health care institutions has motivated the use of entity extraction tools to extract meaningful information from clinical notes written in an unstructured, narrative style. This paper investigates the performance of two such tools in automatic entity extraction. Specifically, this work focuses on the automatic medication extraction performance of Amazon Comprehend Medical (ACM) and the Clinical Language Annotation, Modeling and Processing (CLAMP) toolkit using the 2014 i2b2 NLP challenge dataset and its annotated medical entities. Recall, precision and F-score are used to evaluate the performance of the tools. Clinical Relevance - The majority of data in electronic health records (EHRs) are in the form of free text, which constitutes a gold mine of patient information, while computerized applications in healthcare institutions as well as clinical research leverage structured data. As a result, information hidden in clinical free text needs to be extracted and formatted as structured data. This paper evaluates the performance of ACM and CLAMP in automatic entity extraction. The evaluation results show that CLAMP achieves an F-score of 91%, compared to an 87% F-score for ACM.
UR - http://www.scopus.com/inward/record.url?scp=85122519717&partnerID=8YFLogxK
U2 - 10.1109/EMBC46164.2021.9630611
DO - 10.1109/EMBC46164.2021.9630611
M3 - Conference contribution
C2 - 34891677
AN - SCOPUS:85122519717
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
SP - 1989
EP - 1992
BT - 43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2021
Y2 - 1 November 2021 through 5 November 2021
ER -