CardioBERTpt: Transformer-based Models for Cardiology Language Representation in Portuguese

Production type
conferenceObject
Publication date
2023
Publisher
IEEE COMPUTER SOC
Authors
SCHNEIDER, Elisa Terumi Rubel
GUMIEL, Yohan Bonescki
SOUZA, Joao Vitor Andrioli de
MUKAI, Lilian Mie
OLIVEIRA, Lucas Emanuel Silva e
REBELO, Marina de Sa
GUTIERREZ, Marco Antonio
KRIEGER, Jose Eduardo
TEODORO, Douglas
MORO, Claudia
Citation
2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS, p.378-381, 2023
Abstract
Contextual word embeddings and the Transformer architecture have reached state-of-the-art results in many natural language processing (NLP) tasks and improved the adaptation of models to multiple domains. Despite these advances in model reuse and construction, few resources have been developed for the Portuguese language, especially in the health domain. Furthermore, the clinical models available for the language are not representative enough of all medical specialties. This work explores deep contextual embedding models for Portuguese to support clinical NLP tasks. We transferred learned information from electronic health records of a Brazilian tertiary hospital specialized in cardiology and pre-trained multiple clinical BERT-based models. We evaluated the performance of these models in named entity recognition (NER) experiments, fine-tuning them on two annotated corpora of clinical narratives. Our pre-trained models outperformed previous multilingual and Portuguese BERT-based models in cardiology and multi-specialty settings, reaching the state of the art on the analyzed corpora, with a 5.5% F1 score improvement on TempClinBr (all entities) and 1.7% on SemClinBr (Disorder entity). Hence, we demonstrate that data representativeness and a high volume of training data can improve results for clinical tasks, in line with results for other languages.
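The F1 improvements reported above come from entity-level scoring of NER output. As a minimal sketch of how exact-match span F1 over BIO tags is typically computed (the tag sequences and entity labels below are hypothetical illustrations, not the actual annotation schemas of TempClinBr or SemClinBr):

```python
# Sketch of entity-level (exact span match) F1, the metric family used to
# compare NER models. Tag sequences here are hypothetical BIO examples.

def extract_entities(tags):
    """Return the set of (start, end, label) spans in a BIO tag sequence.

    `end` is exclusive. Stray I- tags without a preceding B- are ignored,
    which is one common (lenient) convention.
    """
    entities = []
    start, label = None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        # Close the open span on O, on a new B-, or on a mismatched I- tag.
        if label is not None and (tag == "O" or tag.startswith("B-")
                                  or tag != f"I-{label}"):
            entities.append((start, i, label))
            start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
    return set(entities)

def entity_f1(gold_tags, pred_tags):
    """Micro F1: a predicted entity counts only if span and label both match."""
    gold, pred = extract_entities(gold_tags), extract_entities(pred_tags)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 0.0 if tp == 0 else 2 * precision * recall / (precision + recall)

gold = ["B-Disorder", "I-Disorder", "O", "B-Drug", "O"]
pred = ["B-Disorder", "I-Disorder", "O", "O", "O"]
print(round(entity_f1(gold, pred), 3))  # → 0.667 (1 of 2 gold entities found)
```

In practice, libraries such as seqeval implement this scoring; the sketch only shows the exact-match convention, under which a predicted span with the right label but a shifted boundary earns no credit.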
Keywords
natural language processing, transformer, clinical texts, language model