April 21, 2024, 2 p.m. | code_your_own_AI

code_your_own_AI www.youtube.com

A brand new Language Model Architecture: RecurrentLLM. Moving Past Transformers.
Google developed RecurrentGemma-2B and compares this new LM architecture (!) with the classical transformer-based Gemma 2B, whose self-attention has quadratic complexity. The new model's throughput is about 6,000 tokens per second.

Introduction and Model Architecture:
The original paper by Google introduces "RecurrentGemma-2B," leveraging the Griffin architecture, which moves away from traditional global attention mechanisms in favor of a combination of linear recurrences and local attention. This design enables the …
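To make the two ingredients concrete, here is a minimal, illustrative sketch (not Google's implementation; all function names, gate values, and sizes are assumptions) of a gated linear recurrence scanned over the sequence and a causal sliding-window attention, the two mechanisms Griffin combines in place of global attention:

```python
# Illustrative sketch only: a per-channel gated linear recurrence plus
# local (sliding-window) causal attention, mixed over a toy sequence.
import numpy as np

def linear_recurrence(x, a, b):
    """Scan h_t = a * h_{t-1} + b * x_t along the time axis.

    x: (seq_len, dim) inputs; a, b: (dim,) per-channel gates.
    Cost is O(seq_len) and only the current state h_t is kept at
    inference time, so throughput does not degrade with context length.
    """
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = a * h + b * x[t]
        out[t] = h
    return out

def local_attention(q, k, v, window=4):
    """Causal attention restricted to the last `window` positions."""
    seq_len, dim = q.shape
    out = np.empty_like(v)
    for t in range(seq_len):
        lo = max(0, t - window + 1)
        scores = q[t] @ k[lo:t + 1].T / np.sqrt(dim)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[t] = weights @ v[lo:t + 1]
    return out

# Toy usage: combine both mechanisms on a random sequence.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
a = np.full(8, 0.9)      # decay gate near 1 preserves long-range information
b = 1.0 - a              # input gate (illustrative choice)
mixed = linear_recurrence(x, a, b) + local_attention(x, x, x, window=4)
print(mixed.shape)       # (16, 8)
```

The point of the sketch is the cost profile: both pieces run in time linear in sequence length, in contrast to the quadratic global self-attention of the baseline Gemma 2B.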

