Efficient Streaming Language Models with Attention Sinks (Paper Explained)
Oct. 14, 2023, 9:43 p.m. | Yannic Kilcher
How does one run inference for a generative autoregressive language model that has been trained with a fixed context size? Streaming LLMs keep the efficiency of windowed attention while avoiding its drop in performance by using attention sinks: an interesting phenomenon where the token at position 0 acts as an absorber of "extra" attention.
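The idea can be sketched as a cache-eviction policy: alongside a sliding window over the most recent tokens, the key/value entries of the first few tokens (the attention sinks) are never evicted. A minimal sketch, assuming the KV cache is a flat list of per-token entries; the names `n_sink` and `window` are illustrative, not from the paper's code:

```python
def evict_kv_cache(cache, n_sink=4, window=1024):
    """Streaming-LLM-style eviction sketch: keep the first n_sink
    entries (attention sinks) plus the most recent `window` entries,
    and drop everything in between."""
    if len(cache) <= n_sink + window:
        return cache  # nothing to evict yet
    return cache[:n_sink] + cache[-window:]

# Toy usage: entries 0-1 are sinks, the last 3 form the window.
print(evict_kv_cache(list(range(10)), n_sink=2, window=3))
```

Pure windowed attention would correspond to `n_sink=0`, which is exactly the setting whose perplexity degrades once the first tokens slide out of the cache.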
OUTLINE:
0:00 - Introduction
1:20 - What is the problem?
10:30 - The hypothesis: Attention Sinks
15:10 - Experimental evidence
18:45 - …