Oct. 14, 2023, 9:43 p.m. | Yannic Kilcher

Yannic Kilcher www.youtube.com

#llm #ai #chatgpt

How does one run inference for a generative autoregressive language model that has been trained with a fixed context size? Streaming LLMs keep the efficiency of windowed attention but avoid its drop in performance by retaining attention sinks: an interesting phenomenon where the token at position 0 acts as an absorber of "extra" attention.
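
To make the mechanism concrete, here is a minimal sketch of the kind of KV-cache eviction policy this implies: the first few tokens are kept as permanent sinks while the rest of the cache is a sliding window. All names here (`SinkKVCache`, `num_sinks`, `window_size`) are illustrative assumptions, not the paper's actual implementation.

```python
from collections import deque


class SinkKVCache:
    """Keep the first `num_sinks` tokens forever (the attention sinks)
    plus a sliding window of the most recent `window_size` tokens."""

    def __init__(self, num_sinks: int = 4, window_size: int = 1020):
        self.num_sinks = num_sinks
        self.window_size = window_size
        self.sinks = []                           # KV entries for positions 0..num_sinks-1
        self.window = deque(maxlen=window_size)   # rolling KV entries; old ones evict

    def append(self, kv_entry):
        # The first few tokens become permanent sinks; everything else
        # rolls through the fixed-size window.
        if len(self.sinks) < self.num_sinks:
            self.sinks.append(kv_entry)
        else:
            self.window.append(kv_entry)

    def current(self):
        # Attention at each decoding step sees sinks + recent window,
        # so the cache stays bounded no matter how long generation runs.
        return self.sinks + list(self.window)


if __name__ == "__main__":
    cache = SinkKVCache(num_sinks=4, window_size=8)
    for pos in range(20):
        cache.append(f"kv@{pos}")
    # Positions 0-3 survive as sinks; the window holds only the last 8.
    print(cache.current())
```

The point of the design is that the total attended context stays constant (sinks + window), so a model trained with a fixed context size can generate indefinitely without the perplexity blow-up that plain windowed attention suffers once the initial tokens are evicted.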

OUTLINE:
0:00 - Introduction
1:20 - What is the problem?
10:30 - The hypothesis: Attention Sinks
15:10 - Experimental evidence
18:45 - …
