April 22, 2024, 4:43 a.m. | Sarthak Choudhary, Aashish Kolluri, Prateek Saxena

cs.LG updates on arXiv.org

arXiv:2312.14461v2 Announce Type: replace-cross
Abstract: Training modern neural networks or models typically requires averaging over a sample of high-dimensional vectors. Poisoning attacks can skew or bias the averaged vectors used to train the model, forcing the model to learn specific patterns or to avoid learning anything useful. Byzantine robust aggregation is a principled algorithmic defense against such biasing. Robust aggregators can bound the maximum bias in computing centrality statistics, such as the mean, even when some fraction of the inputs are arbitrarily corrupted. …
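For a concrete picture of what a robust aggregator does, the sketch below shows a coordinate-wise trimmed mean in NumPy, one standard example of such a defense. This is a minimal illustration, not the construction studied in the paper; the function name `trimmed_mean` and the corruption-fraction parameter `eps` are assumptions chosen for the example.

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, eps: float) -> np.ndarray:
    """Coordinate-wise trimmed mean over n high-dimensional vectors.

    updates: (n, d) array, one vector per contributor.
    eps: assumed upper bound on the fraction of corrupted inputs.

    Per coordinate, drop the k = ceil(eps * n) smallest and the k
    largest values, then average the remainder. This limits the bias
    an attacker can introduce by controlling at most an eps fraction
    of the inputs with arbitrary values.
    """
    n, _ = updates.shape
    k = int(np.ceil(eps * n))
    if 2 * k >= n:
        raise ValueError("trim fraction too large: 2*ceil(eps*n) >= n")
    # Sort each coordinate independently, then discard the k extremes
    # on both ends before averaging.
    s = np.sort(updates, axis=0)
    return s[k:n - k].mean(axis=0)

# Example: 100 honest vectors near zero plus 10 adversarial outliers.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 1.0, size=(100, 5))
poison = np.full((10, 5), 50.0)  # large-bias poisoning vectors
all_updates = np.vstack([honest, poison])
print("plain mean   :", all_updates.mean(axis=0))   # pulled toward 50
print("trimmed mean :", trimmed_mean(all_updates, eps=0.1))  # near 0
```

Per-coordinate trimming like this is cheap to compute, but its worst-case bias can still grow with the dimensionality of the vectors, which is precisely the high-dimensional regime this line of work examines.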
