TY - JOUR
T1 - Lightweight transformers for clinical natural language processing
AU - ISARIC Clinical Characterisation Group
AU - Rohanian, Omid
AU - Nouriborji, Mohammadmahdi
AU - Jauncey, Hannah
AU - Kouchaki, Samaneh
AU - Nooralahzadeh, Farhad
AU - Clifton, Lei
AU - Merson, Laura
AU - Clifton, David A.
AU - Abbas, Ali
AU - Abdukahil, Sheryl Ann
AU - Abdulkadir, Nurul Najmee
AU - Abe, Ryuzo
AU - Abel, Laurent
AU - Abrous, Amal
AU - Absil, Lara
AU - Jabal, Kamal Abu
AU - Salah, Nashat Abu
AU - Acharya, Subhash
AU - Acker, Andrew
AU - Adachi, Shingo
AU - Adam, Elisabeth
AU - Adewhajah, Francisca
AU - Adriano, Enrico
AU - Adrião, Diana
AU - Al Ageel, Saleh
AU - Ahmed, Shakeel
AU - Aiello, Marina
AU - Ainscough, Kate
AU - Airlangga, Eka
AU - Aisa, Tharwat
AU - Hssain, Ali Ait
AU - Tamlihat, Younes Ait
AU - Akimoto, Takako
AU - Akmal, Ernita
AU - Al Qasim, Eman
AU - Alalqam, Razi
AU - Alameen, Aliya Mohammed
AU - Alberti, Angela
AU - Al-Dabbous, Tala
AU - Alegesan, Senthilkumar
AU - Alegre, Cynthia
AU - Alessi, Marta
AU - Alex, Beatrice
AU - Alexandre, Kévin
AU - Al-Fares, Abdulrahman
AU - Alfoudri, Huda
AU - Ali, Adam
AU - Ali, Imran
AU - Shah, Naseem Ali
AU - Alidjnou, Kazali Enagnon
N1 - Publisher Copyright:
© The Author(s), 2024.
PY - 2024
Y1 - 2024
N2 - Specialised pre-trained language models are becoming increasingly common in Natural Language Processing (NLP) since they can potentially outperform models trained on generic texts. BioBERT (Lee et al., BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), pp. 1234–1240, 2020) and BioClinicalBERT (Alsentzer et al., Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pp. 72–78, 2019) are two examples of such models that have shown promise in medical NLP tasks. Many of these models are overparametrised and resource-intensive, but thanks to techniques like knowledge distillation, it is possible to create smaller versions that perform almost as well as their larger counterparts. In this work, we focus specifically on the development of compact language models for processing clinical texts (e.g. progress notes and discharge summaries). We developed a number of efficient lightweight clinical transformers using knowledge distillation and continual learning, with the number of parameters ranging from 15 million to 65 million. These models performed comparably to larger models such as BioBERT and BioClinicalBERT and significantly outperformed other compact models trained on general or biomedical data. Our extensive evaluation covered several standard datasets and a wide range of clinical text-mining tasks, including natural language inference, relation extraction, named entity recognition and sequence classification. To our knowledge, this is the first comprehensive study specifically focused on creating efficient and compact transformers for clinical NLP tasks. The models and code used in this study can be found on our Hugging Face profile at https://huggingface.co/nlpie and GitHub page at https://github.com/nlpieresearch/Lightweight-Clinical-Transformers, respectively, promoting reproducibility of our results.
KW - Machine learning
KW - Natural language processing for biomedical texts
UR - http://www.scopus.com/inward/record.url?scp=85183725247&partnerID=8YFLogxK
U2 - 10.1017/S1351324923000542
DO - 10.1017/S1351324923000542
M3 - Article
AN - SCOPUS:85183725247
SN - 1351-3249
VL - 30
SP - 887
EP - 914
JO - Natural Language Engineering
JF - Natural Language Engineering
IS - 5
ER -