March 28, 2024, 4:45 a.m. | Tian Ye, Rajgopal Kannan, Viktor Prasanna, Carl Busart

cs.CV updates on arXiv.org

arXiv:2403.18318v1 Announce Type: new
Abstract: Adversarial attacks have demonstrated the vulnerability of Machine Learning (ML) image classifiers in Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems. An adversarial attack can deceive the classifier into making incorrect predictions by perturbing the input SAR images, for example, with a few scatterers attached to the on-ground objects. Therefore, it is critical to develop robust SAR ATR systems that can detect potential adversarial attacks by leveraging the inherent uncertainty in ML classifiers, thereby …
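The abstract's idea of flagging potential adversarial inputs via a classifier's inherent uncertainty can be illustrated with Monte Carlo dropout: run several stochastic forward passes and treat high predictive entropy as a warning sign. The sketch below is a minimal NumPy illustration of that general approach, not the paper's method; the toy weights, dropout rate, and entropy threshold are all hypothetical.

```python
# Hedged sketch: uncertainty-based detection of suspicious inputs using
# Monte Carlo dropout on a toy classifier (NOT the paper's actual model).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny classifier: one hidden layer, dropout kept on at test time.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_predict(x, n_samples=100, p_drop=0.5):
    """Average class probabilities over stochastic dropout masks."""
    probs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop  # random dropout mask
        h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
        probs.append(softmax(h @ W2))
    return np.mean(probs, axis=0)

def predictive_entropy(p):
    """Entropy of the averaged predictive distribution (higher = less certain)."""
    return float(-(p * np.log(p + 1e-12)).sum())

x = rng.normal(size=16)          # stand-in for a (flattened) SAR image feature
p = mc_predict(x)
H = predictive_entropy(x if False else p)
# Flag the input if entropy exceeds a threshold calibrated on clean data;
# the 0.9 here is purely illustrative.
is_suspicious = H > 0.9
```

In practice the entropy threshold would be calibrated on a held-out set of clean SAR images, and a perturbed input (e.g., one with added scatterers) would be expected to push the predictive distribution toward higher entropy.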

