Distributed Learning based on 1-Bit Gradient Coding in the Presence of Stragglers
March 25, 2024, 4:41 a.m. | Chengxi Li, Mikael Skoglund
cs.LG updates on arXiv.org
Abstract: This paper considers the problem of distributed learning (DL) in the presence of stragglers. For this problem, DL methods based on gradient coding have been widely investigated; these methods redundantly distribute the training data to the workers to guarantee convergence even when some workers are stragglers. However, they require the workers to transmit real-valued vectors during learning, which induces a very high communication burden. To overcome this drawback, we propose a novel DL method …
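The core idea in the abstract can be sketched in a toy simulation. This is not the authors' exact scheme: the worker count, the cyclic replication pattern, and the sign-based aggregation below are illustrative assumptions. It shows the two ingredients the abstract names: each data partition is replicated on s+1 workers so the master can recover every partition's contribution when up to s workers straggle, and workers transmit only gradient signs (one bit per coordinate) rather than real-valued vectors.

```python
import numpy as np

def partial_grad(X, y, w, idx):
    """Least-squares gradient on the data rows in idx: X_i^T (X_i w - y_i)."""
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi)

rng = np.random.default_rng(0)
d, n_samples = 5, 12
X = rng.normal(size=(n_samples, d))
y = X @ rng.normal(size=d)

n_workers, s = 4, 1  # tolerate up to s stragglers
# Split the data into n_workers partitions; worker i holds partitions
# i, i+1, ..., i+s (mod n_workers), so each partition lives on s+1 workers.
parts = np.array_split(np.arange(n_samples), n_workers)

w = np.zeros(d)
stragglers = {2}  # worker 2 never responds in this round

# Each surviving worker sends one sign vector (1 bit per coordinate)
# for every partition it holds; the master keeps the first copy received.
received = {}  # partition id -> sign vector from the fastest holder
for i in range(n_workers):
    if i in stragglers:
        continue
    for j in range(s + 1):
        pid = (i + j) % n_workers
        received.setdefault(pid, np.sign(partial_grad(X, y, w, parts[pid])))

# Despite the straggler, replication lets the master recover all partitions;
# it then aggregates the signs and takes a sign-descent step (signSGD-style).
assert len(received) == n_workers
g_sign = np.sign(sum(received.values()))
w = w - 0.01 * g_sign
```

Compared with sending real-valued partial gradients, each worker message here costs d bits instead of d floats, at the price of an inexact (sign-only) update; the paper's contribution is a principled way to make such a 1-bit scheme converge under gradient coding.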