March 25, 2024, 4:41 a.m. | Chengxi Li, Mikael Skoglund

cs.LG updates on arXiv.org

arXiv:2403.14716v1 Announce Type: new
Abstract: This paper considers the problem of distributed learning (DL) in the presence of stragglers. For this problem, DL methods based on gradient coding have been widely investigated; these methods redundantly distribute the training data across the workers to guarantee convergence even when some workers are stragglers. However, they require the workers to transmit real-valued vectors during learning, which induces a very high communication burden. To overcome this drawback, we propose a novel DL method …
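To make the gradient-coding idea concrete, below is a minimal sketch of one classical scheme, fractional repetition (Tandon et al., 2017), not the method proposed in this paper: with n workers and up to s stragglers, the data is split into n/(s+1) partitions, each replicated on a group of s+1 workers, so any n-s responses still cover every partition. The toy least-squares objective and all names here are illustrative assumptions.

```python
import numpy as np

def partial_gradient(w, X, y):
    """Least-squares gradient on one data partition: X^T (X w - y)."""
    return X.T @ (X @ w - y)

n, s = 6, 2                       # hypothetical: 6 workers, tolerate 2 stragglers
groups = n // (s + 1)             # 2 partition groups of s+1 = 3 workers each
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = rng.normal(size=40)
w = np.zeros(5)

# Redundant data assignment: all workers in group g hold partition g.
parts_X = np.array_split(X, groups)
parts_y = np.array_split(y, groups)
assignment = [i // (s + 1) for i in range(n)]   # worker index -> partition

# Each non-straggling worker transmits its (real-valued) partial gradient --
# exactly the communication cost the paper aims to reduce.
stragglers = {1, 4}                             # arbitrary slow workers
received = {}
for i in range(n):
    if i in stragglers:
        continue
    g = assignment[i]
    received[i] = partial_gradient(w, parts_X[g], parts_y[g])

# Decoding at the master: one survivor per group suffices, since each group
# of s+1 replicas loses at most s workers.
decoded = np.zeros_like(w)
for g in range(groups):
    survivor = next(i for i in received if assignment[i] == g)
    decoded += received[survivor]

full = partial_gradient(w, X, y)                # ground-truth full gradient
assert np.allclose(decoded, full)
print("recovered full gradient despite stragglers:", decoded)
```

Note how every message is a dense real-valued vector of the model's dimension; schemes like the one this paper proposes target exactly that per-iteration communication cost.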

