Dec. 8, 2023, 8:49 p.m. | /u/tsengalb99 | Machine Learning (www.reddit.com)

We're pleased to introduce QuIP#, a new SOTA LLM quantization method that combines incoherence processing from QuIP (the paper) with lattice codebooks to achieve 2-bit LLMs with near-fp16 performance! At 2 bits, LLaMA 2 70B's weights take roughly 70e9 × 2 / 8 ≈ 17.5 GB, so you can now run it on a single 24 GB GPU without offloading, with headroom left for activations and the KV cache.
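To give a feel for what incoherence processing buys you, here is a toy sketch, not the QuIP# implementation (which uses structured rotations and E8 lattice codebooks rather than dense random rotations and scalar rounding): rotating a weight matrix with random orthogonal matrices before 2-bit round-to-nearest spreads outliers out and noticeably cuts the reconstruction error.

```python
# Toy illustration of incoherence processing, NOT the QuIP# algorithm:
# QuIP# uses structured (Hadamard-like) rotations and E8 lattice codebooks;
# this sketch uses dense random orthogonal matrices and plain 2-bit
# round-to-nearest, just to show why rotating before quantizing helps.
import numpy as np

rng = np.random.default_rng(0)

def quantize_2bit(w):
    """Uniform 2-bit (4-level) round-to-nearest over the tensor's range."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / 3.0                 # 4 levels -> 3 steps
    q = np.round((w - lo) / step)          # integer codes in {0, 1, 2, 3}
    return lo + q * step                   # dequantized values

def random_orthogonal(n):
    """Random orthogonal matrix via QR of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

n = 256
w = rng.standard_normal((n, n))
w[rng.random((n, n)) < 0.01] *= 20.0       # inject a few large outliers

# Direct 2-bit quantization: outliers inflate the step size.
err_direct = np.linalg.norm(quantize_2bit(w) - w) / np.linalg.norm(w)

# "Incoherence processing": rotate, quantize, rotate back.
u, v = random_orthogonal(n), random_orthogonal(n)
w_hat = u.T @ quantize_2bit(u @ w @ v.T) @ v
err_rotated = np.linalg.norm(w_hat - w) / np.linalg.norm(w)

print(f"relative error, direct 2-bit:      {err_direct:.3f}")
print(f"relative error, rotate-then-quant: {err_rotated:.3f}")
```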

QuIP# beats all publicly available 2-bit PTQ (post-training quantization) methods on language modeling and zero-shot tasks while staying conceptually clean and simple. We've released quantized LLaMA, Mistral, and OpenHermes models, and a full codebase at [https://github.com/Cornell-RelaxML/quip-sharp](https://github.com/Cornell-RelaxML/quip-sharp)
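For trying out one of the released checkpoints, a minimal sketch of the usual Hugging Face generation loop is below. The repo ID is illustrative only, and the real loading path goes through the quip-sharp codebase (its custom 2-bit kernels do not ship with transformers), so check the linked README for the actual model names and setup.

```python
# Minimal sketch of sampling from a released QuIP# checkpoint.
# The repo ID is illustrative; see the quip-sharp README for real model names
# and for the custom 2-bit kernels, which plain transformers does not include.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "relaxml/Llama-2-70b-E8P-2Bit"   # hypothetical / illustrative ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",        # per the post, 2-bit 70B fits on a 24 GB GPU
    trust_remote_code=True,   # custom quantized-layer code, if the repo provides it
)

prompt = "Explain incoherence processing in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```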


