March 28, 2024, 4:45 a.m. | Tian Ye, Rajgopal Kannan, Viktor Prasanna, Carl Busart

cs.CV updates on arXiv.org

arXiv:2403.18318v1 Announce Type: new
Abstract: Adversarial attacks have demonstrated the vulnerability of Machine Learning (ML) image classifiers in Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems. An adversarial attack can deceive a classifier into making incorrect predictions by perturbing the input SAR images, for example by attaching a few scatterers to objects on the ground. It is therefore critical to develop robust SAR ATR systems that can detect potential adversarial attacks by leveraging the inherent uncertainty in ML classifiers, thereby …
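The abstract's core idea, flagging suspicious inputs via classifier uncertainty, can be illustrated with a minimal sketch. This is not the paper's method: it assumes a toy linear classifier and uses Monte Carlo dropout (random feature masking over repeated forward passes) to estimate predictive entropy, flagging inputs whose entropy exceeds an illustrative threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "classifier": one linear layer + softmax over 3 classes.
# Weights are random for illustration, not trained on SAR data.
W = rng.normal(size=(8, 3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, T=100, p_drop=0.5):
    """Average T stochastic forward passes with random input dropout."""
    probs = []
    for _ in range(T):
        mask = rng.random(x.shape) > p_drop      # drop features at random
        probs.append(softmax((x * mask / (1 - p_drop)) @ W))
    return np.mean(probs, axis=0)

def predictive_entropy(p):
    """Entropy of the averaged predictive distribution (higher = more uncertain)."""
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

x = rng.normal(size=8)                  # stand-in for SAR image features
p = mc_dropout_predict(x)
H = predictive_entropy(p)
# Flag as potentially adversarial when uncertainty is high;
# the 0.8 * log(3) threshold is purely illustrative.
is_suspicious = H > 0.8 * np.log(3)
```

In practice the stochastic passes would come from a Bayesian or dropout-enabled deep network, and the detection threshold would be calibrated on clean validation data.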

