April 22, 2024, 4:41 a.m. | Samarth Bhatia, Felix Dangel

cs.LG updates on arXiv.org

arXiv:2404.12406v1 Announce Type: new
Abstract: Memory is a limiting resource for many deep learning tasks. Besides the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. However, this information is useful for reducing memory whenever gradients are requested only for a parameter subset, as is the case in many modern fine-tuning tasks. Specifically, inputs …
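
The observation can be reproduced with PyTorch's saved-tensor introspection. The following minimal sketch (our illustration, not code from the paper; the _saved_input attribute is an autograd internal whose name may vary across PyTorch versions) freezes a convolution layer's parameters and checks that the input activation is nevertheless kept in the graph, even though it is only needed for the parameter gradients:

    # Minimal sketch (not from the paper): a frozen Conv2d still stores its
    # input in the computation graph. Assumes PyTorch's internal
    # _saved_input attribute on the backward node, which may differ across
    # versions.
    import torch

    conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)
    conv.weight.requires_grad_(False)  # freeze the parameters ...
    conv.bias.requires_grad_(False)

    x = torch.randn(8, 3, 32, 32, requires_grad=True)  # upstream activation
    y = conv(x)

    grad_fn = y.grad_fn  # ConvolutionBackward0
    print(type(grad_fn).__name__)
    # ... yet the full input activation is still stored for backward:
    print(grad_fn._saved_input.shape)  # torch.Size([8, 3, 32, 32])

Only the (frozen) weight is needed to propagate gradients to x, so in principle the stored input could be discarded; this is the memory-saving opportunity the abstract refers to.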
