June 29, 2022, 1:11 a.m. | Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz

stat.ML updates on arXiv.org arxiv.org

We consider the distributed SGD problem, where a main node distributes
gradient calculations among $n$ workers. By assigning tasks to all the workers
and waiting only for the $k$ fastest ones, the main node can trade off the
algorithm's error against its runtime by gradually increasing $k$ as the algorithm
evolves. However, this strategy, referred to as adaptive $k$-sync, neglects the
cost of unused computations and of communicating models to workers that exhibit
straggling behavior. We propose a cost-efficient scheme …
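To make the baseline concrete, here is a minimal toy sketch of the adaptive $k$-sync strategy described in the abstract (not the authors' proposed cost-efficient scheme): the main node dispatches a gradient task to all $n$ workers, waits only for the $k$ fastest, and increases $k$ over the run. The quadratic objective, exponential runtime model, and the schedule for $k$ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10        # number of workers
d = 5         # model dimension
T = 200       # SGD iterations
lr = 0.05

# Toy quadratic objective 0.5 x'Ax - b'x (assumed for illustration only).
A = rng.normal(size=(d, d)); A = A.T @ A + np.eye(d)
b = rng.normal(size=d)
x = np.zeros(d)

for t in range(T):
    # Gradually increase k as the algorithm evolves (assumed linear schedule).
    k = min(n, 1 + t * n // T)

    # Every worker computes a noisy gradient; runtimes are random (straggling).
    grads = np.stack([A @ x - b + rng.normal(scale=1.0, size=d) for _ in range(n)])
    runtimes = rng.exponential(scale=1.0, size=n)

    # The main node waits only for the k fastest workers and averages their gradients;
    # the remaining n - k computations are wasted, which is the cost adaptive
    # k-sync ignores.
    fastest = np.argsort(runtimes)[:k]
    x -= lr * grads[fastest].mean(axis=0)

print("final loss:", 0.5 * x @ A @ x - b @ x)
```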

arxiv cost distributed learning multi-armed bandits
