Nov. 11, 2022, 2:12 a.m. | Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer

cs.LG updates on arXiv.org

Large language models have been widely adopted but require significant GPU
memory for inference. We develop a procedure for Int8 matrix multiplication for
feed-forward and attention projection layers in transformers, which cuts the
memory needed for inference in half while retaining full precision performance.
With our method, a 175B parameter 16/32-bit checkpoint can be loaded, converted
to Int8, and used immediately without performance degradation. This is made
possible by understanding and working around properties of highly systematic
emergent features in …
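To make the idea concrete, here is a minimal NumPy sketch of absmax Int8 quantization and Int8 matrix multiplication with int32 accumulation. This is an illustration, not the paper's implementation: it uses a single tensor-wise scale for brevity, while the paper's LLM.int8() method uses vector-wise scaling and additionally decomposes emergent outlier feature dimensions into a separate 16-bit multiplication. The names `absmax_quantize` and `int8_matmul` are hypothetical.

```python
import numpy as np

def absmax_quantize(x: np.ndarray):
    """Quantize a float matrix to int8 with absmax scaling."""
    scale = 127.0 / np.max(np.abs(x))        # map values into [-127, 127]
    q = np.round(x * scale).astype(np.int8)
    return q, scale

def int8_matmul(a_q, a_scale, b_q, b_scale):
    """Int8 matmul, accumulated in int32, then dequantized to float."""
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
    return acc.astype(np.float32) / (a_scale * b_scale)

# Toy usage: quantize activations and weights, multiply, compare to fp32.
x = np.random.randn(4, 8).astype(np.float32)   # activations
w = np.random.randn(8, 8).astype(np.float32)   # weights
xq, xs = absmax_quantize(x)
wq, ws = absmax_quantize(w)
print(np.abs(int8_matmul(xq, xs, wq, ws) - x @ w).max())
```

In practice the method ships in the authors' bitsandbytes library; at the time of this post, Hugging Face transformers exposed it through the `load_in_8bit=True` flag of `from_pretrained`, which performs the load-then-convert step the abstract describes.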

arxiv, llm, matrix, matrix multiplication, scale, transformers
