Web: http://arxiv.org/abs/2110.04478

May 5, 2022, 1:12 a.m. | Saeed Rashidi, William Won, Sudarshan Srinivasan, Srinivas Sridharan, Tushar Krishna

cs.LG updates on arXiv.org

Distributed training is a solution to reduce DNN training time by splitting
the task across multiple NPUs (e.g., GPUs/TPUs). However, distributed training
adds communication overhead between the NPUs in order to synchronize the
gradients and/or activations, depending on the parallelization strategy. In
next-generation platforms for training at scale, NPUs will be connected through
multi-dimensional networks with diverse, heterogeneous bandwidths. This work
identifies a looming challenge of keeping all network dimensions busy and
maximizing the network BW within the hybrid environment …
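
The gradient-synchronization communication the abstract refers to can be made concrete with a minimal sketch of data-parallel training using an all-reduce. This is generic illustrative PyTorch code, not the scheduling policy the paper proposes; the toy model, data, and process-group setup are assumptions.

```python
# Minimal sketch (assumptions noted above): each rank computes gradients on
# its local shard of the batch, then an all-reduce averages them across ranks.
# This all-reduce traffic is the communication overhead the abstract mentions.
import torch
import torch.distributed as dist


def train_step(model, loss_fn, inputs, targets, world_size):
    # Local forward/backward pass on this NPU's portion of the data.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Gradient synchronization: sum gradients across all ranks, then divide
    # so every rank ends up with the same averaged gradients.
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size


if __name__ == "__main__":
    # Expects RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT to be set,
    # e.g. by launching with torchrun.
    dist.init_process_group(backend="gloo")
    world_size = dist.get_world_size()

    model = torch.nn.Linear(16, 4)          # toy model
    inputs = torch.randn(8, 16)             # toy local shard of the batch
    targets = torch.randn(8, 4)

    train_step(model, torch.nn.MSELoss(), inputs, targets, world_size)
    dist.destroy_process_group()
```

In practice this all-reduce is implemented hierarchically over the multi-dimensional, heterogeneous-bandwidth network dimensions the abstract describes, which is where the challenge of keeping every dimension busy arises.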

arxiv distributed dl models network policy scheduling training
