March 7, 2024, 5:42 a.m. | Minghao Li (Harvard University), Ran Ben Basat (University College London), Shay Vargaftik (VMware Research), ChonLam Lao (Harvard University), Kevin Xu (Harvard University), …

cs.LG updates on arXiv.org

arXiv:2302.08545v2 Announce Type: replace
Abstract: Deep neural networks (DNNs) are the de facto standard for essential use cases such as image classification, computer vision, and natural language processing. As DNNs and datasets grow larger, they require distributed training on increasingly large clusters. A main bottleneck is the resulting communication overhead, where workers exchange model updates (i.e., gradients) on a per-round basis. To address this bottleneck and accelerate training, a widely deployed approach is compression. However, previous deployments often apply bi-directional compression …
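
For intuition about where compression enters the training loop, below is a minimal sketch of one round of bi-directional gradient compression through a parameter server (PS). It is written in Python/NumPy purely for illustration; the compress/decompress helpers implement a generic 8-bit stochastic uniform quantizer and are assumptions of this sketch, not the paper's compression scheme.

    import numpy as np

    def compress(grad, bits=8):
        # Uniform quantization to 2^bits levels; stochastic rounding
        # keeps the quantizer unbiased in expectation.
        levels = 2 ** bits - 1
        lo, hi = float(grad.min()), float(grad.max())
        scale = (hi - lo) / levels if hi > lo else 1.0
        normalized = (grad - lo) / scale
        floor = np.floor(normalized)
        codes = floor + (np.random.rand(*grad.shape) < (normalized - floor))
        return codes.astype(np.uint8), lo, scale

    def decompress(codes, lo, scale):
        return codes.astype(np.float32) * scale + lo

    # One simulated training round: workers compress gradients on the
    # uplink; the PS decompresses, averages, re-compresses, and
    # broadcasts the result on the downlink.
    rng = np.random.default_rng(0)
    worker_grads = [rng.standard_normal(1000).astype(np.float32) for _ in range(4)]

    decoded = [decompress(*compress(g)) for g in worker_grads]  # uplink
    mean_grad = np.mean(decoded, axis=0)
    update = decompress(*compress(mean_grad))                   # downlink

    exact = np.mean(worker_grads, axis=0)
    print("quantization error (L2):", np.linalg.norm(update - exact))

Note how the PS in this sketch must fully decompress the incoming gradients before it can average them, then compress again for the downlink; that decompress-recompress round trip is the bi-directional pattern the abstract refers to.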
