Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
Feb. 21, 2024, 5:41 a.m. | Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath, Yaoliang Yu
cs.LG updates on arXiv.org
Abstract: Machine learning models have achieved great success in supervised, end-to-end training, which requires large amounts of labeled data that are not always available. Recently, many practitioners have shifted to self-supervised learning methods, which use cheap unlabeled data to pre-train a general feature extractor; this extractor can then be adapted to personalized downstream tasks by training only an additional linear layer on limited labeled data. However, such a process may also …
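A minimal sketch of the pipeline the abstract describes: a frozen, pre-trained feature extractor with a small trainable linear head (linear probing) fit on limited labeled downstream data. This is not the paper's code; the torchvision ResNet-18 backbone, the 10-class head, and the hyperparameters are stand-in assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pre-trained backbone and drop its classification head.
# (Assumption: ResNet-18 stands in for any pre-trained feature extractor.)
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_dim = backbone.fc.in_features
backbone.fc = nn.Identity()

# Freeze the extractor: downstream training updates only the linear layer.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

num_classes = 10  # hypothetical downstream task
linear_head = nn.Linear(feature_dim, num_classes)

optimizer = torch.optim.SGD(linear_head.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One linear-probe update on a batch of labeled downstream data."""
    with torch.no_grad():          # the extractor stays fixed
        feats = backbone(images)
    logits = linear_head(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for a labeled batch.
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,)))
```

The poisoning setting studied in the paper targets this workflow: because the feature extractor is pre-trained on unlabeled, easily contaminated data, corrupting that pre-training stage can degrade every downstream linear probe built on top of it.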