April 22, 2024, 4:41 a.m. | Samarth Bhatia, Felix Dangel

cs.LG updates on arXiv.org

arXiv:2404.12406v1 Announce Type: new
Abstract: Memory is a limiting resource for many deep learning tasks. Besides the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. This information, however, is useful for reducing memory whenever gradients are requested only for a subset of parameters, as is the case in many modern fine-tuning tasks. Specifically, inputs …
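To make the abstract's point concrete, here is a minimal sketch of the idea, not the paper's implementation. In a linear layer, backward needs the layer input only to form the weight gradient (grad_weight = grad_output^T @ input), while the input gradient needs only the weight (grad_input = grad_output @ W). So if the weight is frozen, the input need not be stored for backward. The class name below is hypothetical.

```python
import torch

class MemoryEfficientLinearFn(torch.autograd.Function):
    """Linear op that stores its input only if the weight needs a gradient.

    A sketch assuming weight.requires_grad is readable inside forward
    (the requires_grad flag of an existing tensor is not cleared by
    disabled grad mode).
    """

    @staticmethod
    def forward(ctx, x, weight):
        # Remember whether the weight gradient will be needed in backward.
        ctx.weight_needs_grad = weight.requires_grad
        if ctx.weight_needs_grad:
            ctx.save_for_backward(x, weight)
        else:
            # The input is dropped from the graph: this is the memory saving.
            ctx.save_for_backward(weight)
        return x @ weight.t()

    @staticmethod
    def backward(ctx, grad_output):
        if ctx.weight_needs_grad:
            x, weight = ctx.saved_tensors
            # (out, N) @ (N, in) -> (out, in)
            grad_weight = grad_output.transpose(-2, -1) @ x
        else:
            (weight,) = ctx.saved_tensors
            grad_weight = None
        # (N, out) @ (out, in) -> (N, in); needs only the weight.
        grad_x = grad_output @ weight
        return grad_x, grad_weight


# Usage: a frozen weight, as in fine-tuning only a parameter subset.
weight = torch.randn(4, 8, requires_grad=False)
x = torch.randn(16, 8, requires_grad=True)
y = MemoryEfficientLinearFn.apply(x, weight)
y.sum().backward()  # x.grad is computed even though x was never stored
```

Stock PyTorch layers save their inputs unconditionally, so this check is what distinguishes a differentiability-aware layer from the default behavior the abstract criticizes.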

