all AI news
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. (arXiv:2205.01992v2 [cs.LG] UPDATED)
Sept. 2, 2022, 1:12 a.m. | Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marc
cs.LG updates on arXiv.org
The success of machine learning is fueled by the increasing availability of
computing power and large training datasets. The training data is used to learn
new models or update existing ones, assuming that it is sufficiently
representative of the data that will be encountered at test time. This
assumption is challenged by the threat of poisoning, an attack that manipulates
the training data to compromise the model's performance at test time. Although
poisoning has been acknowledged as a relevant threat …
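To make the threat concrete, here is a minimal, self-contained sketch of one classic poisoning strategy, label flipping, where the attacker corrupts a fraction of the training labels. The toy data and the 1-nearest-neighbor classifier are purely illustrative assumptions, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two well-separated Gaussian clusters.
n = 100
X_train = np.vstack([rng.normal(-2.0, 0.5, (n, 2)),
                     rng.normal(+2.0, 0.5, (n, 2))])
y_train = np.array([0] * n + [1] * n)

# Clean test set drawn from the same distributions -- the "data that
# will be encountered at test time".
X_test = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
                    rng.normal(+2.0, 0.5, (50, 2))])
y_test = np.array([0] * 50 + [1] * 50)

def knn1_predict(X_tr, y_tr, X_te):
    """1-nearest-neighbor prediction via brute-force Euclidean distance."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[d.argmin(axis=1)]

acc_clean = (knn1_predict(X_train, y_train, X_test) == y_test).mean()

# Label-flipping attack: the attacker flips 40% of the training labels,
# leaving the feature vectors untouched.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_train), size=int(0.4 * len(y_train)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

acc_poisoned = (knn1_predict(X_train, y_poisoned, X_test) == y_test).mean()

print(f"clean accuracy:    {acc_clean:.2f}")
print(f"poisoned accuracy: {acc_poisoned:.2f}")
```

On this toy problem the clean model classifies essentially perfectly, while the poisoned one degrades sharply, illustrating how tampering with training data alone, without touching the model or the test inputs, compromises test-time performance. The survey covers far more sophisticated attacks (and defenses) than this sketch.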