Jan. 6, 2022, 2:10 a.m. | Amira Guesmi, Khaled N. Khasawneh, Nael Abu-Ghazaleh, Ihsen Alouani

cs.LG updates on arXiv.org

Advances in deep learning have enabled a wide range of promising
applications. However, these systems are vulnerable to Adversarial Machine
Learning (AML) attacks; adversarially crafted perturbations to their inputs
could cause them to misclassify. Several state-of-the-art adversarial attacks
have been shown to reliably fool classifiers, making these attacks
a significant threat. Adversarial attack generation algorithms focus primarily
on creating successful examples while controlling the noise magnitude and
distribution to make detection more difficult. The underlying assumption of
these attacks …
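To make the kind of perturbation described above concrete, the following is a minimal, generic FGSM-style sketch, not the attack studied in this paper. It assumes PyTorch with a pretrained torchvision ResNet-18 and placeholder inputs; epsilon bounds the L-infinity noise magnitude, illustrating the "controlling the noise magnitude" point in the abstract.

# Generic FGSM sketch (illustrative only, not the paper's method):
# take one gradient-sign step on the input to push the classifier
# toward misclassification while keeping the perturbation bounded.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x perturbed by epsilon * sign(grad of the loss w.r.t. x)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Single gradient-sign step, clipped back to the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()

if __name__ == "__main__":
    model = resnet18(weights="IMAGENET1K_V1").eval()
    x = torch.rand(1, 3, 224, 224)   # placeholder image batch (assumption)
    y = torch.tensor([207])          # placeholder label (assumption)
    x_adv = fgsm_attack(model, x, y)
    print("clean pred:", model(x).argmax(1).item(),
          "adv pred:", model(x_adv).argmax(1).item())

Stronger attacks iterate this step or shape the noise distribution to evade detection, which is the design space the abstract refers to.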

Tags: adversarial machine learning, attacks, real-time, arXiv
