[R] Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding - Carnegie Mellon University 2024 - Allows running an unquantized Llama2-70B on an RTX4090 with half-second per token latency!
March 13, 2024, 4:35 p.m. | /u/Singularian2501
r/MachineLearning | www.reddit.com
Github: [https://github.com/Infini-AI-Lab/Sequoia/tree/main](https://github.com/Infini-AI-Lab/Sequoia/tree/main)
Abstract:
>As the usage of large language models (LLMs) grows, performing efficient inference with these models becomes increasingly important. While speculative decoding has recently emerged as a promising direction for speeding up inference, existing methods are limited in their ability to scale to larger speculation budgets, and adapt to different hyperparameters and hardware. This paper introduces Sequoia, a scalable, robust, and hardware-aware algorithm for speculative decoding. To attain better scalability, Sequoia introduces a dynamic programming algorithm …
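To make the idea concrete, here is a minimal sketch of plain (greedy, chain-style) speculative decoding: a cheap draft model proposes several tokens ahead, and the expensive target model verifies them in one pass, accepting the matching prefix. The `draft_next` and `target_next` functions are toy stand-ins, not part of the Sequoia codebase; Sequoia's contribution is choosing an optimal *tree* of speculated tokens (via dynamic programming) rather than this single chain, plus hardware-aware tuning of the speculation budget.

```python
# Minimal sketch of greedy speculative decoding with toy stand-in models.
# Tokens are small integers so the example is self-contained and runnable.

def draft_next(ctx):
    # Cheap "draft" model: simple rule that is usually right.
    return (ctx[-1] + 1) % 10

def target_next(ctx):
    # Expensive "target" model: the ground truth we must match.
    # It disagrees with the draft whenever the last token is 4.
    return (ctx[-1] + 1) % 10 if ctx[-1] != 4 else 0

def speculative_step(ctx, k=6):
    """Draft k tokens, verify against the target, return the accepted tokens."""
    # 1) Draft proposes a chain of k tokens autoregressively (cheap calls).
    proposal, c = [], list(ctx)
    for _ in range(k):
        t = draft_next(c)
        proposal.append(t)
        c.append(t)
    # 2) Target verifies the chain (in practice: one batched forward pass).
    accepted, c = [], list(ctx)
    for t in proposal:
        correct = target_next(c)
        if t == correct:            # token verified: accept it
            accepted.append(t)
            c.append(t)
        else:                       # first mismatch: take the target's token, stop
            accepted.append(correct)
            break
    else:
        # Whole chain accepted: the target's pass also yields one bonus token.
        accepted.append(target_next(c))
    return accepted

ctx = [0]
for _ in range(2):
    ctx += speculative_step(ctx, k=6)
print(ctx)  # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0]
```

Each step costs roughly one target-model forward pass but can emit several tokens, which is the source of the speedup; the larger and better-shaped the speculation budget, the more tokens are accepted per pass, which is the scaling axis Sequoia's tree construction targets.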