Jan. 26, 2022, 2:11 a.m. | Kabir Nagrecha, Arun Kumar

cs.LG updates on arXiv.org arxiv.org

Training deep learning (DL) models that do not fit into the memory of a
single GPU is a vexing process, forcing users to procure multiple GPUs to adopt
model-parallel execution. Unfortunately, sequential dependencies in neural
architectures often block efficient multi-device training, leading to
suboptimal performance. We present 'model spilling', a technique that moves
groups of layers, or shards, of models such as Transformers and CNNs between
DRAM and GPU memory, thus enabling arbitrarily large models to be
trained even …

arxiv, deep learning, learning
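The core idea described in the abstract is that only the shard currently being executed needs to reside in GPU memory; everything else can sit in DRAM. Below is a minimal PyTorch sketch of that idea under stated assumptions: the `SpilledSequential` class, the two-shard split, and the forward-only promotion/demotion logic are illustrative and not the paper's actual system or partitioning strategy.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of layer-wise "spilling": each shard (a group of layers)
# is promoted to GPU memory only while it is needed, then demoted back to DRAM.
# This is not the paper's implementation; it only illustrates the mechanism.

class SpilledSequential(nn.Module):
    def __init__(self, shards, device=None):
        super().__init__()
        self.shards = nn.ModuleList(shards)  # all shards start on CPU (DRAM)
        self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")

    def forward(self, x):
        x = x.to(self.device)
        for shard in self.shards:
            shard.to(self.device)   # promote this shard's parameters to the GPU
            x = shard(x)            # run only this shard on the GPU
            shard.to("cpu")         # spill it back to DRAM to free GPU memory
        return x

# Example: a model split into two shards that never co-reside on the GPU.
model = SpilledSequential([
    nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()),
    nn.Sequential(nn.Linear(4096, 1024), nn.ReLU()),
])
out = model(torch.randn(8, 1024))
```

Note that a real training system would also have to re-promote shards during the backward pass and overlap the DRAM-GPU transfers with computation to hide their cost; the sketch above shows only the forward-pass spilling pattern.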
