Sparse LLMs at inference: 6x faster transformers! | DEJAVU paper explained
Feb. 3, 2024, 12:51 p.m. | AI Coffee Break with Letitia (www.youtube.com)
📜 Liu, Zichang, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava et al. "Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time." In International Conference on Machine Learning, pp. 22137-22176. PMLR, 2023. https://arxiv.org/abs/2310.17157
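The paper's core idea, contextual sparsity, is that for any given input only a small, input-dependent subset of attention heads and MLP neurons contributes to the output, so a lightweight predictor can select that subset ahead of time and skip the rest. As a minimal sketch (assuming a ReLU MLP; the exact-oracle selection below stands in for DejaVu's learned low-cost predictor):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 16, 64                      # hidden size, MLP width (toy values)
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(d, h))
x = rng.normal(size=d)

# Dense MLP forward pass: y = W2 @ relu(W1 @ x)
pre = W1 @ x
dense = W2 @ np.maximum(pre, 0)

# With ReLU, neurons whose pre-activation is <= 0 contribute nothing,
# so computing only the active subset reproduces the output exactly.
# (DejaVu trains a small predictor to guess this set from x; here we
# read it off the full pre-activation as an oracle, for illustration.)
idx = np.nonzero(pre > 0)[0]
sparse = W2[:, idx] @ np.maximum(W1[idx] @ x, 0)

assert np.allclose(dense, sparse)
print(f"active neurons: {len(idx)}/{h}")
```

The speedup in the paper comes from predicting `idx` cheaply before the layer runs, so the large matrix multiplies touch only the selected rows and columns.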
Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏
Dres. Trost GbR, Siltax, Vignesh …