March 29, 2024, 4:43 a.m. | Gaurav Kumar Nayak, Inder Khatri, Ruchit Rawal, Anirban Chakraborty

cs.LG updates on arXiv.org

arXiv:2211.01579v3 Announce Type: replace
Abstract: Companies often safeguard their trained deep models (i.e., architecture, learnt weights, training details, etc.) from third-party users by exposing them only as black boxes through APIs. Moreover, they may not even provide access to the training data due to proprietary reasons or sensitivity concerns. In this work, we propose a novel defense mechanism for black-box models against adversarial attacks in a data-free setup. We construct synthetic data via a generative model …
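The abstract is truncated, but the setting it describes (no access to training data, interaction with the protected model only through its prediction API, and synthetic data constructed via a generative model) can be sketched in a few lines. The following is a minimal illustration of that data-free, black-box interaction only, not the paper's actual defense; the `Generator`, `query_black_box`, and the toy victim model are hypothetical stand-ins.

```python
import math
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy generator: maps latent noise to image-shaped synthetic samples."""
    def __init__(self, latent_dim=100, out_shape=(3, 32, 32)):
        super().__init__()
        self.out_shape = out_shape
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, math.prod(out_shape)),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(z.size(0), *self.out_shape)

def query_black_box(model_api, x):
    """Stand-in for an API call: only outputs are visible,
    never gradients, weights, or architecture details."""
    with torch.no_grad():
        return model_api(x).argmax(dim=1)

# Hypothetical black-box victim model; in practice this is a remote API.
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

gen = Generator()
z = torch.randn(64, 100)
x_synth = gen(z)                                 # synthetic data, no real training set needed
pseudo_labels = query_black_box(victim, x_synth) # labels obtained purely via API queries
```

The (x_synth, pseudo_labels) pairs produced this way are what a data-free method has to work with; how the paper turns them into an adversarial defense is described in the full text.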
