Characterizing the Accuracy-Efficiency Trade-off of Low-rank Decomposition in Language Models
May 13, 2024, 4:42 a.m. | Chakshu Moar, Michael Pellauer, Hyoukjun Kwon
cs.LG updates on arXiv.org arxiv.org
Abstract: Large language models (LLMs) have emerged as general problem solvers, handling a broad range of tasks with a single model. However, model sizes have grown dramatically, to billions of parameters, to enable such broad capabilities. In addition, because matrix-matrix and matrix-vector multiplications dominate LLM computation, the compute-to-model-size ratio is significantly lower than that of CNNs. This shift pushes LLMs from a compute-bound regime to a memory-bound regime. Therefore, optimizing the memory footprint and traffic …
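The low-rank decomposition the title refers to typically factors a large weight matrix W into two thin matrices A and B so that fewer parameters need to be stored and moved from memory, at some cost in accuracy. A minimal numpy sketch using truncated SVD (the 512x512 size and rank 64 are illustrative assumptions, not values from the paper):

```python
import numpy as np

def low_rank_decompose(W, rank):
    """Approximate W (m x n) as A @ B, with A (m x rank) and B (rank x n),
    via truncated SVD. Keeping only the top singular values gives the best
    rank-r approximation in the Frobenius norm."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]  # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

# Toy example: a 512x512 "weight matrix" decomposed at rank 64.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B = low_rank_decompose(W, rank=64)

orig_params = W.size             # 512 * 512 = 262144
lr_params = A.size + B.size      # 2 * 512 * 64 = 65536, a 4x reduction
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(orig_params, lr_params, rel_err)
```

The memory saving is the whole point for the memory-bound regime the abstract describes: a forward pass computes `x @ A @ B` instead of `x @ W`, reading 4x fewer weights in this toy configuration. The accuracy side of the trade-off shows up as the reconstruction error, which grows as the rank shrinks.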