High Dimensional Distributed Gradient Descent with Arbitrary Number of Byzantine Attackers
March 28, 2024, 4:43 a.m. | Puning Zhao, Zhiguo Wan
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Robust distributed learning with Byzantine failures has attracted extensive research interest in recent years. However, most existing methods suffer from the curse of dimensionality, which becomes increasingly serious with the growing complexity of modern machine learning models. In this paper, we design a new method that is suitable for high dimensional problems, under an arbitrary number of Byzantine attackers. The core of our design is a direct high dimensional semi-verified mean estimation method. Our idea is …