May 13, 2024, 4:42 a.m. | Satyadwyoom Kumar, Saurabh Gupta, Arun Balaji Buduru

cs.LG updates on arXiv.org arxiv.org

arXiv:2405.06049v1 Announce Type: cross
Abstract: Deep Learning has become popular due to its vast applications in almost all domains. However, models trained using deep learning are prone to failure on adversarial samples and carry considerable risk in sensitive applications. Most of these adversarial attack strategies assume that the adversary has access to the training data, the model parameters, and the input during deployment, and hence focus on perturbing the pixel-level information present in the input image.
Adversarial Patches were …
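The white-box, pixel-level attack setting the abstract describes can be illustrated with a minimal FGSM-style sketch. This is not the paper's method; it is a generic example assuming full gradient access, using a toy logistic-regression "model" on a flattened image. All names (`w`, `b`, `fgsm_attack`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_gradient(x, y, w, b):
    # d(loss)/dx for logistic regression has the closed form (p - y) * w,
    # i.e. the attacker differentiates the loss w.r.t. the *input pixels*.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm_attack(x, y, w, b, eps=0.1):
    # Nudge each pixel by eps in the direction that increases the loss,
    # then clip back to the valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(input_gradient(x, y, w, b)), 0.0, 1.0)

# Toy flattened "image" and model weights.
w = rng.normal(size=64)
b = 0.0
x = rng.uniform(0.0, 1.0, size=64)
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.1)
print(loss(x, y, w, b), loss(x_adv, y, w, b))  # loss rises after the attack
```

Note that this sketch needs the model's gradients, which is exactly the strong access assumption the abstract flags as a limitation of most attack strategies; patch-based and black-box attacks relax it.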

