July 19, 2023, 2:04 p.m. | /u/Entire-Plane2795

r/MachineLearning | www.reddit.com

So very recently, a new paper was published to arXiv called "Retentive Network: A Successor to Transformer for Large Language Models": [https://arxiv.org/abs/2307.08621](https://arxiv.org/abs/2307.08621). The title makes a fairly strong claim: transformers have long been established as among the best general-purpose architectures in the deep learning literature, so describing your model as a "successor to Transformer" is not something to be done lightly.

From what I can tell, the math checks out, and the authors demonstrate an intriguing …
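For anyone who wants to sanity-check the core claim themselves, here's a minimal NumPy sketch of single-head retention as I read it from the paper, simplified by dropping the complex rotation (xPos-style) term and the group norm. It just verifies numerically that the parallel form (QKᵀ ⊙ D)V, with decay mask D[n, m] = γ^(n−m) for m ≤ n, matches the recurrent form Sₙ = γ·Sₙ₋₁ + Kₙᵀ Vₙ, oₙ = Qₙ Sₙ, which is what gives the constant-memory inference. The function names and shapes here are my own, not from the authors' code.

```python
import numpy as np

def retention_parallel(Q, K, V, gamma):
    """Parallel (training-time) form: (Q K^T ⊙ D) V with a causal decay mask D."""
    n = Q.shape[0]
    idx = np.arange(n)
    # D[i, j] = gamma**(i - j) for j <= i, else 0
    D = np.tril(gamma ** (idx[:, None] - idx[None, :]))
    return (Q @ K.T * D) @ V

def retention_recurrent(Q, K, V, gamma):
    """Recurrent (inference-time) form: a fixed-size state S, so O(1) memory per step."""
    d_k, d_v = K.shape[1], V.shape[1]
    S = np.zeros((d_k, d_v))
    outputs = []
    for q, k, v in zip(Q, K, V):
        S = gamma * S + np.outer(k, v)   # S_n = gamma * S_{n-1} + K_n^T V_n
        outputs.append(q @ S)            # o_n = Q_n S_n
    return np.stack(outputs)

# Both forms should agree up to floating-point error
rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
gamma = 0.9
assert np.allclose(retention_parallel(Q, K, V, gamma),
                   retention_recurrent(Q, K, V, gamma))
```

The recurrent form only ever stores the d×d state matrix S, which is where the claimed O(1)-per-token inference memory comes from, versus the growing KV cache in standard attention.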

