April 4, 2024, 9 p.m. | /u/darthjaja6

Deep Learning www.reddit.com

I'm quite familiar with transformers and RNNs. I can write both from scratch, and I have no problem understanding the papers behind them.

Recently I started reading work that attacks the quadratic inference drawback of transformers, starting with Mamba. Most of the individual pieces look obvious or familiar to me on their own, but after reading the paper and a few YouTube videos, I still don't feel like I get it. Take the paper as an example: I know that the SSM …
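For anyone landing here with the same question, a minimal sketch of the piece the post is pointing at may help. This is my own illustration, not from the post or the Mamba paper's code: the discretized linear SSM recurrence h_t = Ā h_{t-1} + B̄ x_t, y_t = C h_t, which is why inference is O(1) per token instead of a transformer's ever-growing attention over the whole context. All names and dimensions below are made up for the toy example.

```python
import numpy as np

def ssm_scan(A_bar, B_bar, C, xs):
    """Run the discretized linear SSM recurrence over xs of shape (T, d_in).

    h_t = A_bar @ h_{t-1} + B_bar @ x_t   (state update)
    y_t = C @ h_t                          (readout)
    One fixed-size state update per token, so cost is linear in T.
    """
    d_state = A_bar.shape[0]
    h = np.zeros(d_state)
    ys = []
    for x in xs:
        h = A_bar @ h + B_bar @ x
        ys.append(C @ h)
    return np.stack(ys)

# Toy dimensions, purely for illustration
rng = np.random.default_rng(0)
T, d_in, d_state, d_out = 8, 4, 16, 4
A_bar = 0.9 * np.eye(d_state)                  # stable toy dynamics
B_bar = 0.1 * rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_out, d_state))
ys = ssm_scan(A_bar, B_bar, C, rng.normal(size=(T, d_in)))
print(ys.shape)  # (8, 4)
```

Mamba's actual contribution is making A_bar/B_bar input-dependent (selective) while keeping a fast scan, but the fixed-size recurrent state above is the part that kills the quadratic cost.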

