Transformer tricks: Precomputing the first layer
Feb. 22, 2024, 5:41 a.m. | Nils Graef
cs.LG updates on arXiv.org
Abstract: This short paper describes a trick to speed up inference of transformers with RoPE (such as LLaMA, Mistral, and PaLM). For these models, a large portion of the first transformer layer can be precomputed, which results in slightly lower latency and lower cost-per-token. Because this trick optimizes only one layer, the relative savings depend on the total number of layers. For example, the maximum savings for a model with only 4 layers (such as Whisper …
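To make the idea concrete, here is a minimal sketch of the precompute step, under assumed LLaMA-like shapes and weight names (not taken from the paper): because a RoPE model adds no positional embedding to the input, the first layer's input is just the token embedding, so the per-token part of layer 0 (RMSNorm plus the Q/K/V projections) can be run once over the whole vocabulary and turned into lookup tables.

```python
# Hedged sketch: precompute the per-token part of transformer layer 0 for a
# RoPE model. Shapes, weight names, and the RMSNorm epsilon are assumptions,
# not values from the paper.
import torch

vocab_size, d_model = 32000, 4096                 # LLaMA-7B-like sizes (assumed)

# Stand-ins for the trained embedding table and first-layer weights.
embed  = torch.randn(vocab_size, d_model)
norm_w = torch.ones(d_model)                      # RMSNorm weight
w_q    = torch.randn(d_model, d_model)
w_k    = torch.randn(d_model, d_model)
w_v    = torch.randn(d_model, d_model)

def rms_norm(x, w, eps=1e-6):
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) * w

# Offline: run the token-only part of layer 0 over the entire embedding table.
with torch.no_grad():
    h = rms_norm(embed, norm_w)                   # (vocab, d_model)
    q_table = h @ w_q                             # (vocab, d_model)
    k_table = h @ w_k
    v_table = h @ w_v

# Online: layer-0 Q/K/V for a prompt become table lookups; RoPE is applied
# afterwards because it depends only on the token position, not its identity.
token_ids = torch.tensor([17, 529, 2023])         # arbitrary example ids
q0, k0, v0 = q_table[token_ids], k_table[token_ids], v_table[token_ids]
```

The trade-off is memory: each table is vocab_size x d_model, so the savings in compute and latency come at the cost of storing (and reading) these precomputed projections.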