JOSE EDUARDO KRIEGER

(Source: Lattes)
h-index since 2011: 36
Research Projects
Organizational Units
Departamento de Cardio-Pneumologia, Faculdade de Medicina - Faculty Member
Instituto do Coração, Hospital das Clínicas, Faculdade de Medicina
LIM/13 - Laboratório de Genética e Cardiologia Molecular, Hospital das Clínicas, Faculdade de Medicina - Leader

Search Results

Now showing 1 - 1 of 1
  • conferenceObject
    CardioBERTpt: Transformer-based Models for Cardiology Language Representation in Portuguese
    (2023) SCHNEIDER, Elisa Terumi Rubel; GUMIEL, Yohan Bonescki; SOUZA, Joao Vitor Andrioli de; MUKAI, Lilian Mie; OLIVEIRA, Lucas Emanuel Silva e; REBELO, Marina de Sa; GUTIERREZ, Marco Antonio; KRIEGER, Jose Eduardo; TEODORO, Douglas; MORO, Claudia; PARAISO, Emerson Cabrera
    Contextual word embeddings and the Transformer architecture have reached state-of-the-art results in many natural language processing (NLP) tasks and improved the adaptation of models across multiple domains. Despite these improvements in model reuse and construction, few resources have been developed for the Portuguese language, especially in the health domain. Furthermore, the clinical models available for the language are not representative enough of all medical specialties. This work explores deep contextual embedding models for the Portuguese language to support clinical NLP tasks. We transferred learned information from electronic health records of a Brazilian tertiary hospital specialized in cardiology and pre-trained multiple clinical BERT-based models. We evaluated the performance of these models in named entity recognition experiments, fine-tuning them on two annotated corpora containing clinical narratives. Our pre-trained models outperformed previous multilingual and Portuguese BERT-based models in cardiology and multi-specialty settings, reaching the state of the art on the analyzed corpora, with a 5.5% F1-score improvement on TempClinBr (all entities) and 1.7% on SemClinBr (Disorder entity). Hence, we demonstrate that data representativeness and a high volume of training data can improve results for clinical tasks, in line with findings for other languages.
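
    The abstract describes domain-adaptive pre-training followed by fine-tuning for named entity recognition. The sketch below illustrates, in broad strokes, how such a clinical BERT model could be loaded and applied to Portuguese clinical text with the Hugging Face transformers library; the model identifier is a placeholder assumption, not the official CardioBERTpt release name, and the label set is illustrative only.

    ```python
    # Minimal sketch: applying a Portuguese clinical BERT model to NER,
    # assuming a token-classification checkpoint is available.
    # "some-org/cardiobertpt" is a HYPOTHETICAL identifier, not the published model name.
    from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

    MODEL_NAME = "some-org/cardiobertpt"  # placeholder (assumption)

    # Load the tokenizer and the fine-tuned token-classification model
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME)

    # Build an NER pipeline that merges sub-word predictions into entity spans
    ner = pipeline(
        "token-classification",
        model=model,
        tokenizer=tokenizer,
        aggregation_strategy="simple",
    )

    # Example clinical narrative in Portuguese (illustrative input)
    print(ner("Paciente com insuficiência cardíaca e fração de ejeção reduzida."))
    ```

    In the setup reported in the paper, such a model would first be pre-trained on hospital EHR text and then fine-tuned on annotated corpora such as TempClinBr and SemClinBr before being used for inference as above.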