GABRIEL VANSUITA VALENTE

(Source: Lattes)
h-index since 2011: 1
Research Projects
Organizational Units
LIM/23 - Laboratório de Psicopatologia e Terapêutica Psiquiátrica, Hospital das Clínicas, Faculdade de Medicina

Search Results

Now showing 1 - 3 of 3
  • Conference paper
    Evaluation of an AI system for breast cancer screening in mammograms of young women.
    (2020) PETRINI, Daniel Gustavo Pellacani; VALENTE, Gabriel Vansuita; SHIMIZU, Carlos; ROELA, Rosimeire Aparecida; ARAUJO, Gabriel Miranda de; TUCUNDUVA, Tatiana Cardoso de Mello; FOLGUEIRA, Maria A. A. Koike; KIM, Hae Yong
  • Conference paper
    Deep learning algorithm performance in mammography screening: A systematic review and meta-analysis.
    (2021) ROELA, Rosimeire Aparecida; VALENTE, Gabriel Vansuita; SHIMIZU, Carlos; LOPEZ, Rossana Veronica Mendoza; TUCUNDUVA, Tatiana Cardoso de Mello; FOLGUEIRA, Guilherme Koike; KATAYAMA, Maria Lucia Hirata; PETRINI, Daniel Gustavo Pellacani; NOVAES, Guilherme Apolinario Silva; SERIO, Pedro Adolpho de Menezes Pacheco; MARTA, Guilherme Nader; SAMESHIMA, Koichi; KIM, Hae Yong; FOLGUEIRA, Maria A. A. Koike
  • Article (18 citations in Scopus)
    Breast Cancer Diagnosis in Two-View Mammography Using End-to-End Trained EfficientNet-Based Convolutional Network.
    (2022) PETRINI, Daniel G. P.; SHIMIZU, Carlos; ROELA, Rosimeire A.; VALENTE, Gabriel Vansuita; FOLGUEIRA, Maria Aparecida Azevedo Koike; KIM, Hae Yong
    Some recent studies have described deep convolutional neural networks that diagnose breast cancer in mammograms with performance similar or even superior to that of human experts. One of the best techniques performs two transfer learnings: the first uses a model trained on natural images to create a "patch classifier" that categorizes small subimages; the second uses the patch classifier to scan the whole mammogram and create the "single-view whole-image classifier". We propose a third transfer learning to obtain a "two-view classifier" that uses the two mammographic views: bilateral craniocaudal and mediolateral oblique. We use EfficientNet as the basis of our model and train the entire system end to end on the CBIS-DDSM dataset. To ensure statistical robustness, we test our system twice, using: (a) 5-fold cross-validation; and (b) the original training/test division of the dataset. Our technique reached an AUC of 0.9344 using 5-fold cross-validation (accuracy, sensitivity, and specificity are 85.13% at the equal error rate point of the ROC). Using the original dataset division, our technique achieved an AUC of 0.8483, which is, as far as we know, the highest AUC reported for this problem, although subtle differences in the testing conditions of each work do not allow for an accurate comparison. The inference code and model are available at https://github.com/dpetrini/two-views-classifier
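
    As an aside on the architecture the abstract describes, below is a minimal PyTorch sketch of a two-view fusion classifier, assuming ImageNet-pretrained EfficientNet-B0 backbones from torchvision. The class name TwoViewClassifier, the B0 variant, and the concatenation head are illustrative assumptions, not the authors' implementation; their actual model, including the patch-classifier and single-view transfer steps, is in the repository linked above.

        import torch
        import torch.nn as nn
        from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

        class TwoViewClassifier(nn.Module):
            # Illustrative sketch, not the authors' code: one EfficientNet-B0
            # backbone per mammographic view (CC and MLO); pooled features are
            # concatenated and fed to a linear head. In the paper, each backbone
            # would first be trained as a patch classifier and then as a
            # single-view whole-image classifier before this fusion step.
            def __init__(self, num_classes: int = 2):
                super().__init__()
                weights = EfficientNet_B0_Weights.IMAGENET1K_V1
                self.cc_backbone = efficientnet_b0(weights=weights).features
                self.mlo_backbone = efficientnet_b0(weights=weights).features
                self.pool = nn.AdaptiveAvgPool2d(1)
                # EfficientNet-B0 emits 1280 feature channels per view.
                self.head = nn.Linear(2 * 1280, num_classes)

            def forward(self, cc: torch.Tensor, mlo: torch.Tensor) -> torch.Tensor:
                f_cc = self.pool(self.cc_backbone(cc)).flatten(1)     # craniocaudal view
                f_mlo = self.pool(self.mlo_backbone(mlo)).flatten(1)  # mediolateral oblique view
                return self.head(torch.cat([f_cc, f_mlo], dim=1))

        # Dummy forward pass; real mammograms would be preprocessed and resized.
        model = TwoViewClassifier()
        logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
        print(logits.shape)  # torch.Size([1, 2])

    Training this module with a single optimizer over both backbones and the head mirrors the end-to-end training the abstract mentions; the published system differs in backbone initialization and input resolution.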