Oct. 13, 2022, 1:12 a.m. | Sami Alabed, Dominik Grewe, Juliana Franco, Bart Chrzaszcz, Tom Natan, Tamara Norman, Norman A. Rink, Dimitrios Vytiniotis, Michael Schaarschmidt

cs.LG updates on arXiv.org

Large neural network models are commonly trained through a combination of
advanced parallelism strategies in a single program, multiple data (SPMD)
paradigm. For example, training large transformer models requires combining
data, model, and pipeline partitioning with optimizer sharding techniques.
However, identifying efficient combinations for many model architectures and
accelerator systems requires significant manual analysis. In this work, we
present an automatic partitioner that identifies these combinations through a
goal-oriented search. Our key findings are that a Monte Carlo Tree Search-based …
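
As an illustration of the kind of goal-oriented, Monte Carlo Tree Search-based exploration the abstract describes, here is a minimal Python sketch that searches over sequences of partitioning decisions. Everything in it is hypothetical: the action names, the depth limit, and the estimated_cost heuristic are toy stand-ins, and the sketch is not the authors' partitioner or cost analysis.

```python
import math
import random
from dataclasses import dataclass, field

# Hypothetical partitioning decisions the search can apply (not the paper's action set).
ACTIONS = ["data_parallel", "model_parallel", "pipeline", "optimizer_shard", "stop"]
MAX_DEPTH = 4  # maximum number of decisions in one strategy

def estimated_cost(strategy):
    """Toy stand-in for a cost model: lower is better.
    Rewards mixing complementary strategies, penalises redundant steps."""
    unique = len(set(strategy) - {"stop"})
    return 10.0 - 2.0 * unique + 0.5 * (len(strategy) - unique)

@dataclass
class Node:
    strategy: tuple                 # partitioning decisions taken so far
    parent: "Node" = None
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

    def untried_actions(self):
        tried = {c.strategy[-1] for c in self.children}
        return [a for a in ACTIONS if a not in tried]

    def terminal(self):
        return len(self.strategy) >= MAX_DEPTH or (self.strategy and self.strategy[-1] == "stop")

def uct_select(node, c=1.4):
    """Pick the child balancing exploitation (average reward) and exploration."""
    return max(node.children,
               key=lambda ch: ch.value / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(strategy):
    """Random playout to a complete strategy, scored as negative cost (the search goal)."""
    s = list(strategy)
    while len(s) < MAX_DEPTH and (not s or s[-1] != "stop"):
        s.append(random.choice(ACTIONS))
    return -estimated_cost(tuple(s))

def mcts(iterations=500):
    root = Node(strategy=())
    for _ in range(iterations):
        node = root
        # Selection: descend through fully expanded nodes.
        while not node.terminal() and not node.untried_actions():
            node = uct_select(node)
        # Expansion: add one untried partitioning decision.
        if not node.terminal():
            action = random.choice(node.untried_actions())
            child = Node(strategy=node.strategy + (action,), parent=node)
            node.children.append(child)
            node = child
        # Simulation and backpropagation.
        reward = rollout(node.strategy)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    best = max(root.children, key=lambda ch: ch.visits)
    return best.strategy

if __name__ == "__main__":
    print("First partitioning decision chosen by the search:", mcts())
```

In a real partitioner the reward would come from analysis of the partitioned program on the target accelerator system rather than from a hand-written heuristic like the one above.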

Tags: arxiv, discovery, partitioning strategies
