March 6, 2024, 5:43 a.m. | Daniele De Sensi, Tommaso Bonato, David Saam, Torsten Hoefler

cs.LG updates on arXiv.org

arXiv:2401.09356v2 Announce Type: replace-cross
Abstract: The allreduce collective operation accounts for a significant fraction of the runtime of workloads running on distributed systems. One factor determining its performance is the distance between communicating nodes, especially on networks like torus, where a higher distance implies multiple messages being forwarded on the same link, thus reducing the allreduce bandwidth. Torus networks are widely used on systems optimized for machine learning workloads (e.g., Google TPUs and Amazon Trainium devices), as well as on …
