The use of deep learning state-of-the-art architectures for oral epithelial dysplasia grading: A comparative appraisal

Citations in Scopus
1
Production type
article
Publication date
2023
Publisher
WILEY
Authors
SILVA, Viviane Mariano da
MORAES, Matheus Cardoso
AMORIM, Henrique Alves de
FONSECA, Felipe Paiva
SANT'ANA, Maria Sissa Pereira
MESQUITA, Ricardo Alves
MARIZ, Bruno Augusto Linhares Almeida
PONTES, Helder Antonio Rebelo
SOUZA, Lucas Lacerda de
Citation
JOURNAL OF ORAL PATHOLOGY & MEDICINE, v.52, n.10, p.980-987, 2023
Abstract
Background: Dysplasia grading systems for oral epithelial dysplasia are a source of disagreement among pathologists. Machine learning approaches are therefore being developed to mitigate this issue.

Methods: This cross-sectional study included a cohort of 82 patients with oral potentially malignant disorders and the corresponding 98 hematoxylin and eosin-stained whole-slide images with biopsy-proven dysplasia. All whole-slide images were manually annotated according to the binary system for oral epithelial dysplasia. The annotated regions of interest were segmented, fragmented into small patches, and non-randomly sampled into training/validation and test subsets. The training/validation data were color augmented, yielding a total of 81,786 training patches; the held-out independent test set comprised 4,486 patches. Seven state-of-the-art convolutional neural networks (CNNs) were trained, validated, and tested on the same dataset.

Results: The models learned the training data rapidly yet showed very low generalization potential. During model development, VGG16 performed best, but with massive overfitting. On the test set, VGG16 achieved the best accuracy, sensitivity, specificity, and area under the curve (62%, 62%, 66%, and 65%, respectively), together with the highest loss among all CNNs tested. EfficientNetB0 presented comparable metrics and the lowest loss of all the CNNs, making it a strong candidate for further studies.

Conclusion: The models were unable to generalize well enough to be applied to real-life datasets, owing to an overlap of features between the two classes (i.e., high risk and low risk of malignant transformation).
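The patch-based pipeline the abstract describes (patches cut from annotated regions, color augmentation, transfer-learning CNNs, and patch-level accuracy/sensitivity/specificity/AUC) can be illustrated with a minimal sketch. The sketch below assumes TensorFlow/Keras with an ImageNet-pretrained EfficientNetB0 backbone; the directory layout, image size, optimizer, and epoch count are hypothetical placeholders, not the authors' published configuration.

```python
# Minimal sketch of a patch-level binary dysplasia-grading pipeline,
# assuming TensorFlow/Keras. Paths, image size, and hyperparameters are
# illustrative placeholders, not the authors' published configuration.
import numpy as np
import tensorflow as tf
from sklearn.metrics import confusion_matrix, roc_auc_score

IMG_SIZE = (224, 224)

# Patches pre-sorted into class subfolders, e.g. patches/train/{low,high}/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "patches/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "patches/test", image_size=IMG_SIZE, batch_size=32, label_mode="binary",
    shuffle=False)  # keep order so predictions align with labels

# Color augmentation, applied on the fly during training only
# (the study augments offline, before training).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomBrightness(0.1),
    tf.keras.layers.RandomContrast(0.1),
])

# ImageNet-pretrained EfficientNetB0 backbone with a binary head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")
inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = base(augment(inputs))
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, epochs=10)

# Patch-level test metrics reported in the paper:
# accuracy, sensitivity, specificity, and AUC.
y_true = np.concatenate([y.numpy() for _, y in test_ds]).ravel().astype(int)
y_prob = model.predict(test_ds).ravel()
tn, fp, fn, tp = confusion_matrix(y_true, (y_prob > 0.5).astype(int)).ravel()
print("accuracy:   ", (tp + tn) / len(y_true))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC:        ", roc_auc_score(y_true, y_prob))
```

Swapping `base` for another backbone, e.g. `tf.keras.applications.VGG16`, reuses the same head and metrics, which is one plausible way to reproduce the kind of multi-architecture comparison the study performs.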
Keywords
artificial intelligence, dysplasia grading, erythroleukoplakia, leukoplakia, whole slide images