Feb. 13, 2024, 5:44 a.m. | Haoyu Li, Yuchen Xu, Jiayi Chen, Rohit Dwivedula, Wenfei Wu, Keqiang He, Aditya Akella, Daehyeok Kim

cs.LG updates on arXiv.org

As deep neural networks (DNNs) grow in complexity and size, the resulting increase in communication overhead during distributed training has become a significant bottleneck, challenging the scalability of distributed training systems. Existing solutions aim to mitigate this bottleneck through worker-level compression and in-network aggregation, but they fall short because they cannot efficiently reconcile the trade-off between compression effectiveness and computational overhead, which hinders overall performance and scalability. In this paper, we introduce a novel compression algorithm that effectively merges worker-level …
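The excerpt ends before the paper's algorithm is described. As a rough illustration of the two ideas the abstract contrasts, worker-level gradient compression and aggregation across workers, here is a minimal, hypothetical top-k sparsification sketch in NumPy. The function names, the k=10 budget, and the dense scatter-add aggregation are assumptions made for illustration only, not the paper's method.

# Hypothetical sketch: each worker keeps only its k largest-magnitude gradient
# entries, and an aggregator (e.g., a parameter server or in-network switch)
# sums the sparse contributions. Not the paper's algorithm.
import numpy as np

def topk_compress(grad, k):
    # Indices of the k largest-magnitude entries, plus their values.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def aggregate(compressed, dim):
    # Scatter-add each worker's sparse update into a dense running sum.
    total = np.zeros(dim)
    for idx, vals in compressed:
        np.add.at(total, idx, vals)
    return total

rng = np.random.default_rng(0)
workers = [rng.standard_normal(1_000) for _ in range(4)]   # stand-in gradients
compressed = [topk_compress(g, k=10) for g in workers]      # ~1% of entries sent
agg = aggregate(compressed, dim=1_000)

The point of the sketch is the trade-off the abstract names: aggressive compression (small k) shrinks the bytes each worker sends, but the compression step itself and the loss of gradient information add computational and accuracy costs that the aggregation layer cannot recover on its own.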

Tags: aggregation, communication, compression, cs.DC, cs.DS, cs.LG, cs.NI, deep learning, distributed training, neural networks, scalability, systems
