Jan. 31, 2024, 4:45 p.m. | Suchita Pati, Shaizeen Aga, Mahzabeen Islam, Nuwan Jayasena, Matthew D. Sinclair

cs.LG updates on arXiv.org

Large Language Models increasingly rely on distributed techniques for their
training and inference. These techniques require communication across devices,
which can reduce scaling efficiency as the number of devices increases. While
some distributed techniques can overlap, and thus hide, this communication with
independent computations, techniques such as Tensor Parallelism (TP) inherently
serialize communication with model execution. One approach to hide this
serialized communication is to interleave it with the producer operation (of
the communicated data) in a fine-grained manner. However, …
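To make the problem concrete, here is a minimal sketch (not the paper's implementation) of why TP communication serializes with model execution, and how interleaving it with the producer operation can hide it. It assumes PyTorch with torch.distributed already initialized and a row-parallel linear layer; the function names and chunk count are hypothetical.

```python
# Sketch only: contrasts an exposed all-reduce after a TP producer GEMM
# with a fine-grained, chunked overlap of the same GEMM and all-reduce.
import torch
import torch.distributed as dist

def tp_row_parallel_serialized(x, w_shard):
    # Row-parallel linear: each rank computes a partial output, then an
    # all-reduce sums the partials. The all-reduce cannot start until the
    # whole GEMM finishes, so the communication sits on the critical path.
    partial = torch.matmul(x, w_shard)              # producer GEMM
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)  # exposed communication
    return partial

def tp_row_parallel_finegrained(x, w_shard, num_chunks=4):
    # Fine-grained interleaving: split the producer GEMM along the batch
    # dimension and launch an asynchronous all-reduce for each chunk as soon
    # as its partial output is ready, overlapping it with the next chunk's GEMM.
    outputs, handles = [], []
    for x_chunk in torch.chunk(x, num_chunks, dim=0):
        partial = torch.matmul(x_chunk, w_shard)
        handles.append(dist.all_reduce(partial, op=dist.ReduceOp.SUM,
                                       async_op=True))
        outputs.append(partial)
    for h in handles:
        h.wait()                                    # communication largely hidden
    return torch.cat(outputs, dim=0)
```

The chunked variant trades a single large GEMM and collective for several smaller ones, which is the basic cost/benefit that fine-grained overlap schemes must balance.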

