Web: http://arxiv.org/abs/2206.01736

June 23, 2022, 1:13 a.m. | Linhai Ma, Liang Liang

cs.CV updates on arXiv.org

It is known that Deep Neural Networks (DNNs) are vulnerable to adversarial
attacks, and the adversarial robustness of DNNs can be improved by adding
adversarial noise to the training data (e.g., standard adversarial training
(SAT)). However, inappropriate noise added to the training data may reduce a
model's performance on clean inputs, a phenomenon known as the trade-off
between accuracy and robustness. This problem has been extensively studied for
the classification of whole images but has rarely been explored for image
analysis tasks in the medical …
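For context, the SAT baseline the abstract refers to follows the generic recipe of Madry et al.: at each training step, perturb the input to maximize the loss within a small L-inf ball, then update the model on the perturbed input. Below is a minimal PyTorch sketch of that recipe; the `pgd_attack`/`sat_epoch` helpers and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not values or code from this paper.

```python
# Minimal sketch of standard adversarial training (SAT) with a PGD attack.
# Hyperparameters and helper names are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-inf bounded adversarial examples by projected gradient descent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take an ascent step on the loss, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def sat_epoch(model, loader, optimizer, device="cpu"):
    """One epoch of SAT: train on adversarial examples instead of clean ones."""
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.eval()                     # freeze BN stats while crafting attacks
        x_adv = pgd_attack(model, x, y)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Note that this sketch trains only on perturbed inputs, which is exactly the design choice that can cost clean accuracy and gives rise to the accuracy-robustness trade-off the abstract describes.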

Tags: arxiv, detection, image, medical, robustness, segmentation, training
