Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
April 24, 2024, 10:11 p.m. | Yannic Kilcher (www.youtube.com)
Paper: https://arxiv.org/abs/2404.07143
Abstract:
This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term linear attention mechanisms in a single Transformer block. We demonstrate the …
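To make the mechanism described in the abstract concrete, here is a minimal sketch of the Infini-attention idea in PyTorch, assuming a single head and one segment at a time for clarity. The ELU+1 feature map, the additive memory update, and the learned scalar gate follow the paper's description; variable names and the simplified shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal single-head sketch of Infini-attention (illustrative, not the official code).
import torch
import torch.nn.functional as F

def infini_attention_segment(q, k, v, memory, z, beta):
    """Process one segment of length L with head dimension d.

    q, k, v : (L, d) queries, keys, values for the current segment
    memory  : (d, d) compressive memory carried over from earlier segments
    z       : (d,)   normalization term for the memory
    beta    : ()     learned scalar gating long-term vs. local attention
    """
    d = q.size(-1)

    # 1) Masked (causal) local dot-product attention within the segment.
    scores = q @ k.t() / d**0.5
    causal_mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(causal_mask, float("-inf"))
    a_local = scores.softmax(dim=-1) @ v                        # (L, d)

    # 2) Long-term retrieval from the compressive memory via linear attention.
    sigma_q = F.elu(q) + 1                                      # (L, d)
    a_mem = (sigma_q @ memory) / (sigma_q @ z).clamp(min=1e-6).unsqueeze(-1)

    # 3) Update the fixed-size memory and its normalization with this segment.
    sigma_k = F.elu(k) + 1
    new_memory = memory + sigma_k.t() @ v                       # (d, d)
    new_z = z + sigma_k.sum(dim=0)                              # (d,)

    # 4) Gate between long-term retrieval and local attention.
    gate = torch.sigmoid(beta)
    output = gate * a_mem + (1 - gate) * a_local
    return output, new_memory, new_z

# Usage: stream arbitrarily many segments while memory stays bounded in size.
if __name__ == "__main__":
    d, L = 64, 128
    memory, z = torch.zeros(d, d), torch.zeros(d)
    beta = torch.tensor(0.0)
    for _ in range(4):                       # four segments; in principle unbounded
        q, k, v = (torch.randn(L, d) for _ in range(3))
        out, memory, z = infini_attention_segment(q, k, v, memory, z, beta)
    print(out.shape)                         # torch.Size([128, 64])
```

The key point the sketch illustrates is that the memory matrix and normalization vector have a fixed size regardless of how many segments have been processed, which is what bounds memory and computation for arbitrarily long inputs.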