Breast Cancer Diagnosis in Two-View Mammography Using End-to-End Trained EfficientNet-Based Convolutional Network

Citations in Scopus
18
Production type
article
Publication date
2022
Publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
Citation
IEEE Access, v. 10, p. 77723-77731, 2022
Abstract
Some recent studies have described deep convolutional neural networks that diagnose breast cancer in mammograms with performance similar or even superior to that of human experts. One of the best techniques performs two transfer-learning steps: the first uses a model trained on natural images to create a "patch classifier" that categorizes small subimages; the second uses the patch classifier to scan the whole mammogram and create a "single-view whole-image classifier". We propose a third transfer-learning step to obtain a "two-view classifier" that uses the two mammographic views: bilateral craniocaudal and mediolateral oblique. We use EfficientNet as the basis of our model and train the entire system end-to-end on the CBIS-DDSM dataset. To ensure statistical robustness, we test our system twice, using (a) 5-fold cross-validation and (b) the original training/test division of the dataset. Our technique reached an AUC of 0.9344 using 5-fold cross-validation (accuracy, sensitivity, and specificity are 85.13% at the equal-error-rate point of the ROC curve). Using the original dataset division, our technique achieved an AUC of 0.8483, as far as we know the highest reported AUC for this problem, although subtle differences in the testing conditions of each work do not allow for an accurate comparison. The inference code and model are available at https://github.com/dpetrini/two-views-classifier.
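
To illustrate the two-view architecture described in the abstract, the sketch below combines two EfficientNet backbones, one per mammographic view, by concatenating their pooled features and feeding them to a small classification head. This is not the authors' released code (see the GitHub repository above); it assumes PyTorch and torchvision, and the layer sizes, dropout rate, and class names are illustrative. In the paper's third transfer-learning step the backbones would be initialized from the previously trained single-view whole-image classifiers rather than left untrained.

```python
# Illustrative sketch of a two-view mammogram classifier (PyTorch + torchvision).
# NOT the authors' released implementation; names and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class TwoViewClassifier(nn.Module):
    """Fuses CC and MLO views by concatenating pooled EfficientNet features."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One EfficientNet-B0 backbone per view. In the paper's approach these
        # would be initialized from the single-view whole-image classifiers.
        self.cc_backbone = efficientnet_b0(weights=None).features
        self.mlo_backbone = efficientnet_b0(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # EfficientNet-B0 emits 1280-channel feature maps; two views -> 2560.
        self.head = nn.Sequential(nn.Dropout(0.3), nn.Linear(2 * 1280, num_classes))

    def forward(self, cc: torch.Tensor, mlo: torch.Tensor) -> torch.Tensor:
        f_cc = self.pool(self.cc_backbone(cc)).flatten(1)
        f_mlo = self.pool(self.mlo_backbone(mlo)).flatten(1)
        return self.head(torch.cat([f_cc, f_mlo], dim=1))


if __name__ == "__main__":
    model = TwoViewClassifier()
    cc = torch.randn(1, 3, 224, 224)   # craniocaudal view (dummy input)
    mlo = torch.randn(1, 3, 224, 224)  # mediolateral oblique view (dummy input)
    print(model(cc, mlo).shape)        # torch.Size([1, 2])
```

Concatenating pooled features is one simple fusion strategy; end-to-end training, as described in the abstract, then fine-tunes both backbones and the head jointly on paired CC/MLO exams.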
Keywords
Mammography, Convolutional neural networks, Training, Transfer learning, Breast cancer, Artificial intelligence, Lesions, Breast cancer diagnosis, Deep learning, Mammogram