Feb. 28, 2024, 5:41 a.m. | Leonid Boytsov, Ameya Joshi, Filipe Condessa

cs.LG updates on arXiv.org

arXiv:2402.17018v1 Announce Type: new
Abstract: We tested front-end enhanced neural models where a frozen classifier was prepended by a differentiable and fully convolutional model with a skip connection. By training them using a small learning rate for about one epoch, we obtained models that retained the accuracy of the backbone classifier while being unusually resistant to gradient attacks including APGD and FAB-T attacks from the AutoAttack package, which we attributed to gradient masking. The gradient masking phenomenon is not new, …
