Energy-efficiency Limits on Training AI Systems using Learning-in-Memory
Feb. 26, 2024, 5:41 a.m. | Zihao Chen, Johannes Leugering, Gert Cauwenberghs, Shantanu Chakrabartty
Source: cs.LG updates on arXiv.org (arxiv.org)
Abstract: Learning-in-memory (LIM) is a recently proposed paradigm to overcome fundamental memory bottlenecks in training machine learning systems. While compute-in-memory (CIM) approaches can address the so-called memory-wall (i.e., the energy dissipated due to repeated memory read accesses), they are agnostic to the energy dissipated due to repeated memory writes at the precision required for training (the update-wall), and they do not account for the energy dissipated when transferring information between short-term and long-term memories (the consolidation-wall). The LIM …
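As a rough illustration of the three "walls" named in the abstract, the sketch below decomposes the energy of one training step into a read term, a write term, and a consolidation term. All operation counts and per-access energies here are hypothetical placeholders chosen for illustration; they are not figures or a model taken from the paper.

```python
# Hypothetical back-of-envelope decomposition of per-step training energy
# into the three "walls" described above. Per-access energies are assumed
# placeholder values, not numbers from the paper.

def training_step_energy_joules(
    n_reads: float,               # memory read accesses (memory-wall)
    n_writes: float,              # high-precision weight writes (update-wall)
    n_consolidations: float,      # short-term -> long-term transfers (consolidation-wall)
    e_read: float = 10e-12,       # assumed energy per read, joules
    e_write: float = 100e-12,     # assumed energy per precise write, joules
    e_consolidate: float = 1e-9,  # assumed energy per consolidation transfer, joules
) -> float:
    """Total energy ~= read term + update term + consolidation term."""
    return (n_reads * e_read
            + n_writes * e_write
            + n_consolidations * e_consolidate)

# Example: even if reads dominate in count, the write and consolidation terms
# can dominate the energy, which is why addressing only the memory-wall
# (as CIM does) leaves the other two walls untouched.
print(training_step_energy_joules(n_reads=1e9, n_writes=1e8, n_consolidations=1e6))
```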