Web: http://arxiv.org/abs/2209.06931

Sept. 16, 2022, 1:11 a.m. | Alexander Cann, Ian Colbert, Ihab Amer

cs.LG updates on arXiv.org

The widespread adoption of deep neural networks in computer vision
applications has generated significant interest in adversarial
robustness. Existing research has shown that maliciously perturbed inputs
tailored to a given model (i.e., adversarial examples) can be
successfully transferred to another, independently trained model to induce
prediction errors. Moreover, this transferability of adversarial examples has
been attributed to features derived from predictive patterns in the data
distribution. Thus, we are motivated to investigate the following question: Can
adversarial …
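The transferability phenomenon the abstract describes is straightforward to reproduce. Below is a minimal sketch, not the paper's method: it uses the fast gradient sign method (FGSM) to craft perturbations against one model and measures how often they flip the predictions of a second, independently trained model. The architectures, epsilon, and data are illustrative placeholders, and input normalization is omitted for brevity.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical setup: two independently trained classifiers.
# These specific architectures are assumptions for illustration.
source_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Placeholder batch: images in [0, 1] and ground-truth labels.
x = torch.rand(4, 3, 224, 224)
y = torch.randint(0, 1000, (4,))

# Craft the perturbation on the source model only.
x_adv = fgsm_attack(source_model, x, y)

# Transferability check: how often does a perturbation crafted on the
# source model also change the target model's prediction?
with torch.no_grad():
    clean_pred = target_model(x).argmax(dim=1)
    adv_pred = target_model(x_adv).argmax(dim=1)
print("target predictions flipped:",
      (clean_pred != adv_pred).float().mean().item())
```

In a real transfer experiment the flip rate would be measured on correctly classified, properly normalized inputs; the fraction of predictions changed on the target model is the usual transferability metric.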

Tags: arxiv, feature, networks
