March 21, 2024, 4:41 a.m. | Fabio De Gaspari, Dorjan Hitaj, Luigi V. Mancini

cs.LG updates on arXiv.org

arXiv:2403.13523v1 Announce Type: new
Abstract: The unprecedented availability of training data has fueled the rapid development of powerful neural networks in recent years. However, the need for such large amounts of data leads to potential threats such as poisoning attacks: adversarial manipulations of the training data aimed at compromising the learned model to achieve a given adversarial goal.
This paper investigates defenses against clean-label poisoning attacks and proposes a novel approach to detect and filter poisoned datapoints in the transfer learning …
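The full abstract is truncated above, so the paper's actual detection method is not shown here. As a generic illustration of the defense setting it describes, the sketch below implements one simple, well-known heuristic for filtering suspicious training points in transfer learning: embed the data with a frozen feature extractor, then flag points that lie unusually far from their class centroid in feature space. The function name, the MAD-based threshold, and the overall approach are assumptions for illustration, not the method proposed in the paper.

```python
import numpy as np

def filter_suspicious_points(features, labels, threshold=3.0):
    """Flag training points whose frozen-extractor features lie far from
    their class centroid.

    A generic outlier-filtering heuristic for clean-label poisoning
    (illustrative only; NOT the paper's proposed method).

    features:  (n, d) array, e.g. penultimate-layer activations
    labels:    (n,) array of integer class labels
    threshold: robust deviations beyond which a point is flagged
    Returns a boolean mask: True = keep, False = suspected poison.
    """
    keep = np.ones(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        class_feats = features[idx]
        centroid = class_feats.mean(axis=0)
        dists = np.linalg.norm(class_feats - centroid, axis=1)
        # Median absolute deviation as a robust scale estimate,
        # so a handful of poisons cannot inflate the cutoff.
        med = np.median(dists)
        mad = np.median(np.abs(dists - med)) + 1e-12
        keep[idx[dists > med + threshold * mad]] = False
    return keep

# Usage: given features from a frozen backbone and training labels,
# keep only the points that pass the filter before fine-tuning.
# mask = filter_suspicious_points(feats, y)
# clean_feats, clean_y = feats[mask], y[mask]
```

Centroid-distance filtering is deliberately minimal; published defenses in this area typically use stronger feature-space statistics, and the paper presumably differs in its detection criterion.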
