Breast Cancer Diagnosis in Two-View Mammography Using End-to-End Trained EfficientNet-Based Convolutional Network
Citations in Scopus
18
Publication type
article
Publication date
2022
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Authors
PETRINI, Daniel G. P.
KIM, Hae Yong
Citation
IEEE ACCESS, v.10, p.77723-77731, 2022
Abstract
Some recent studies have described deep convolutional neural networks that diagnose breast cancer in mammograms with performance similar or even superior to that of human experts. One of the best techniques performs two transfer learnings: the first uses a model trained on natural images to create a "patch classifier" that categorizes small subimages; the second uses the patch classifier to scan the whole mammogram and create the "single-view whole-image classifier". We propose a third transfer learning to obtain a "two-view classifier" that uses the two mammographic views: bilateral craniocaudal and mediolateral oblique. We use EfficientNet as the basis of our model and train the entire system "end-to-end" on the CBIS-DDSM dataset. To ensure statistical robustness, we test our system twice, using: (a) 5-fold cross-validation; and (b) the original training/test division of the dataset. Our technique reached an AUC of 0.9344 using 5-fold cross-validation (accuracy, sensitivity, and specificity are all 85.13% at the equal error rate point of the ROC). Using the original dataset division, our technique achieved an AUC of 0.8483, which is, as far as we know, the highest AUC reported for this problem, although subtle differences in the testing conditions of each work do not allow for an exact comparison. The inference code and model are available at https://github.com/dpetrini/two-views-classifier
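The abstract reports operating-point metrics at the equal error rate of the ROC curve, where sensitivity equals specificity (hence a single 85.13% figure for accuracy, sensitivity, and specificity). As a minimal sketch of how such numbers are derived from classifier scores (not the authors' evaluation code; all names here are illustrative), one can trace the ROC curve, integrate it for the AUC, and pick the threshold index where the false-positive rate is closest to 1 minus the true-positive rate:

```python
import numpy as np

def roc_points(labels, scores):
    """ROC curve from binary labels (1 = malignant) and classifier scores."""
    order = np.argsort(-scores)          # descending score
    labels = labels[order]
    tps = np.cumsum(labels)              # true positives at each threshold
    fps = np.cumsum(1 - labels)          # false positives at each threshold
    tpr = np.concatenate(([0.0], tps / tps[-1]))
    fpr = np.concatenate(([0.0], fps / fps[-1]))
    return fpr, tpr

def auc_and_eer_point(labels, scores):
    """AUC (trapezoidal rule) and sensitivity/specificity at the EER point."""
    fpr, tpr = roc_points(labels, scores)
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
    # Equal error rate: operating point where FPR is closest to 1 - TPR,
    # i.e. where sensitivity and specificity coincide.
    i = np.argmin(np.abs(fpr - (1.0 - tpr)))
    return auc, tpr[i], 1.0 - fpr[i]
```

On a perfectly separable toy set, e.g. `labels = [1, 1, 0, 0]` with `scores = [0.9, 0.8, 0.2, 0.1]`, this yields AUC = 1.0 with sensitivity and specificity both 1.0 at the EER point.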
Keywords
Mammography, Convolutional neural networks, Training, Transfer learning, Breast cancer, Artificial intelligence, Lesions, Breast cancer diagnosis, deep learning, convolutional neural network, mammogram, transfer learning