Feb. 13, 2024, 5:42 a.m. | Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Jon Egana-Zubia, Raul Orduna-Urrutia

cs.LG updates on arXiv.org

In recent years, Deep Neural Network models have been developed in many different fields, where they have brought significant advances. However, they are also starting to be used in tasks where risk is critical: an incorrect prediction by these models can lead to serious accidents or even death. This concern has led researchers to study possible attacks on these models, uncovering a long list of vulnerabilities against which every model should be defended. The adversarial example attack is …
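For readers unfamiliar with the adversarial example attack the abstract refers to, a common instance is the Fast Gradient Sign Method (FGSM): the input is perturbed by a small step in the direction that increases the model's loss, often flipping the prediction while the change stays imperceptible. The sketch below is illustrative only and is not taken from the paper; the toy logistic-regression "model", its weights, and the inputs are all made-up assumptions for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """Perturb x by epsilon in the sign of the loss gradient w.r.t. the input.

    For a logistic model p = sigmoid(w @ x + b) with cross-entropy loss,
    the input gradient is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)     # toy model weights (made up)
b = 0.1
x = rng.normal(size=4)     # toy "clean" input
y = 1.0                    # assumed true label

x_adv = fgsm(x, y, w, b, epsilon=0.5)
clean_pred = sigmoid(w @ x + b)
adv_pred = sigmoid(w @ x_adv + b)
# The perturbation increases the loss, pushing the prediction away
# from the true label y = 1 (adv_pred < clean_pred).
print(adv_pred < clean_pred)
```

Because the toy model is linear in its input, the sign step provably increases the loss here; for deep networks the same step is only a first-order approximation, which is exactly what makes such attacks cheap to mount and hard to defend against.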

