Evaluation of the impact of physical adversarial attacks on deep learning models for classifying covid cases

dc.contributorSistema FMUSP-HC: Faculdade de Medicina da Universidade de São Paulo (FMUSP) e Hospital das Clínicas da FMUSP
dc.contributor.authorAGUIAR, Erikson J. de
dc.contributor.authorMARCOMINI, Karem D.
dc.contributor.authorQUIRINO, Felipe A.
dc.contributor.authorGUTIERREZ, Marco A.
dc.contributor.authorTRAINA JR., Caetano
dc.contributor.authorTRAINA, Agma J. M.
dc.date.accessioned2022-11-25T13:49:43Z
dc.date.available2022-11-25T13:49:43Z
dc.date.issued2022
dc.description.abstractThe SARS-CoV-2 (COVID-19) disease spread rapidly worldwide, increasing the need for new strategies to fight it. Researchers in several fields have attempted to develop methods to identify the disease early and mitigate its effects. Deep Learning (DL) approaches, such as Convolutional Neural Networks (CNNs), have been increasingly used in COVID-19 diagnosis. These models are intended to support decision-making and perform well at detecting patient status early. Although DL models achieve good accuracy in supporting diagnosis, they are vulnerable to Adversarial Attacks, which bias DL models by adding small perturbations to the original image. This paper investigates the impact of Adversarial Attacks on DL models for classifying X-ray images of COVID-19 cases. We focus on the Fast Gradient Sign Method (FGSM) attack, which perturbs the test images by adding a crafted perturbation matrix, producing an adversarial image. We conducted experiments analyzing model performance both attack-free and under attack, using the following CNN models: DenseNet201, ResNet-50V2, MobileNetV2, NasNet, and VGG16. In the attack-free environment, we reached precision around 99%. Under attack, all models suffered performance reductions; the most affected was MobileNetV2, whose performance dropped from 98.61% to 67.73%, while VGG16 proved to be the least affected. Our findings show that DL models for COVID-19 are vulnerable to Adversarial Examples: FGSM was capable of fooling the models, resulting in a significant reduction in DL performance.eng
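The FGSM attack described in the abstract follows Goodfellow et al. (2015, cited below): the adversarial image is x_adv = x + ε · sign(∇ₓ L(x, y)). As a minimal illustrative sketch (not the authors' experimental code), the idea can be shown on a logistic-regression "classifier", where the input gradient of the cross-entropy loss has the closed form (σ(w·x + b) − y) · w; the weights and image here are random placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(w, b, x, y, epsilon):
    """FGSM for a logistic-regression classifier:
    x_adv = x + epsilon * sign(grad_x L(x, y)),
    where grad_x of the cross-entropy loss is (sigmoid(w.x + b) - y) * w."""
    p = sigmoid(np.dot(w, x) + b)          # predicted probability of class 1
    grad = (p - y) * w                     # input gradient of the loss
    x_adv = x + epsilon * np.sign(grad)    # one signed-gradient step
    return np.clip(x_adv, 0.0, 1.0)       # keep pixels in the valid range

# Demo on a fake flattened 8x8 "image" with values in [0, 1].
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.0
x = rng.random(64)
epsilon = 0.05
x_adv = fgsm_attack(w, b, x, y=1.0, epsilon=epsilon)
print(np.abs(x_adv - x).max())  # perturbation is bounded by epsilon
```

The key property, matching the paper's premise, is that the per-pixel perturbation never exceeds ε, so the crafted image looks essentially unchanged to a human while the loss increases in the direction most harmful to the model.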
dc.description.conferencedateFEB 20-MAR 27, 2022
dc.description.conferencelocalELECTR NETWORK
dc.description.conferencenameConference on Medical Imaging - Computer-Aided Diagnosis
dc.description.indexPubMedeng
dc.description.sponsorshipSao Paulo Research Foundation (FAPESP) [2020/14180-4, 2016/17078-0, 2020/07200-9]
dc.description.sponsorshipCoordenacao de Aperfeicoamento de Pessoal de Nivel Superior-Brasil (CAPES) [001]
dc.description.sponsorshipConselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) [131055/2021-6, 382495/2021-7]
dc.identifier.citationMEDICAL IMAGING 2022: COMPUTER-AIDED DIAGNOSIS, v.12033, article ID 120332P, 7p, 2022
dc.identifier.doi10.1117/12.2611199
dc.identifier.eissn1996-756X
dc.identifier.isbn978-1-5106-4942-2; 978-1-5106-4941-5
dc.identifier.issn0277-786X
dc.identifier.urihttps://observatorio.fm.usp.br/handle/OPI/50334
dc.language.isoeng
dc.publisherSPIE-INT SOC OPTICAL ENGINEERINGeng
dc.relation.ispartofMedical Imaging 2022: Computer-Aided Diagnosis
dc.relation.ispartofseriesProceedings of SPIE
dc.rightsrestrictedAccesseng
dc.rights.holderCopyright SPIE-INT SOC OPTICAL ENGINEERINGeng
dc.subjectAdversarial attackseng
dc.subjectdeep neural networkseng
dc.subjectCOVID-19eng
dc.subjectFast Gradient Sign Methodeng
dc.subject.wosEngineering, Biomedicaleng
dc.subject.wosOpticseng
dc.subject.wosRadiology, Nuclear Medicine & Medical Imagingeng
dc.titleEvaluation of the impact of physical adversarial attacks on deep learning models for classifying covid caseseng
dc.typeconferenceObjecteng
dc.type.categoryproceedings papereng
dc.type.versionpublishedVersioneng
dspace.entity.typePublication
hcfmusp.author.externalAGUIAR, Erikson J. de:Univ Sao Paulo, Inst Math & Comp Sci, Sao Carlos, Brazil
hcfmusp.author.externalMARCOMINI, Karem D.:Univ Sao Paulo, Inst Math & Comp Sci, Sao Carlos, Brazil
hcfmusp.author.externalQUIRINO, Felipe A.:Univ Sao Paulo, Inst Math & Comp Sci, Sao Carlos, Brazil
hcfmusp.author.externalTRAINA JR., Caetano:Univ Sao Paulo, Inst Math & Comp Sci, Sao Carlos, Brazil
hcfmusp.author.externalTRAINA, Agma J. M.:Univ Sao Paulo, Inst Math & Comp Sci, Sao Carlos, Brazil
hcfmusp.citation.scopus2
hcfmusp.contributor.author-fmusphcMARCO ANTONIO GUTIERREZ
hcfmusp.description.articlenumber120332P
hcfmusp.description.volume12033
hcfmusp.origemWOS
hcfmusp.origem.scopus2-s2.0-85132837101
hcfmusp.origem.wosWOS:000838048600095
hcfmusp.publisher.cityBELLINGHAMeng
hcfmusp.publisher.countryUSAeng
hcfmusp.relation.referenceAli Z, 2016, INDIAN J ANAESTH, V60, P662, DOI 10.4103/0019-5049.190623eng
hcfmusp.relation.referenceCarlini N, 2017, P IEEE S SECUR PRIV, P39, DOI 10.1109/SP.2017.49eng
hcfmusp.relation.referenceChih-Ling Chang, 2020, SPAI '20: Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence, P47, DOI 10.1145/3385003.3410920eng
hcfmusp.relation.referenceChowdhury MEH, 2020, IEEE ACCESS, V8, P132665, DOI 10.1109/ACCESS.2020.3010287eng
hcfmusp.relation.referenceDemsar J, 2006, J MACH LEARN RES, V7, P1eng
hcfmusp.relation.referenceFezza SA, 2019, INT WORK QUAL MULTIMeng
hcfmusp.relation.referenceGoodfellow I.J., 2015, INT C LEARN REPReng
hcfmusp.relation.referenceHuang Ling, 2011, P 4 ACM WORKSHOP SEC, P43eng
hcfmusp.relation.referenceLi X, 2021, I S BIOMED IMAGING, P1677, DOI 10.1109/ISBI48211.2021.9433761eng
hcfmusp.relation.referenceMadry Aleksander, 2017, DEEP LEARNING MODELSeng
hcfmusp.relation.referenceOzbulak U, 2019, LECT NOTES COMPUT SC, V11765, P300, DOI 10.1007/978-3-030-32245-8_34eng
hcfmusp.relation.referencePereira DG, 2015, COMMUN STAT-SIMUL C, V44, P2636, DOI 10.1080/03610918.2014.931971eng
hcfmusp.relation.referenceRahman MA, 2021, IEEE INTERNET THINGS, V8, P9603, DOI 10.1109/JIOT.2020.3013710eng
hcfmusp.relation.referenceRahman T, 2021, COMPUT BIOL MED, V132, DOI 10.1016/j.compbiomed.2021.104319eng
hcfmusp.scopus.lastupdate2024-05-17
relation.isAuthorOfPublication23ec3b55-50df-4630-902e-bedbb470fecb
relation.isAuthorOfPublication.latestForDiscovery23ec3b55-50df-4630-902e-bedbb470fecb
Files
Original Bundle
Name: art_MARCOMINI_Evaluation_of_the_impact_of_physical_adversarial_attacks_2022.PDF
Size: 1.24 MB
Format: Adobe Portable Document Format
Description: publishedVersion (English)