Feb. 23, 2024, 10:13 p.m. | /u/PaleAle34

Machine Learning www.reddit.com

Hi everyone,

I'm diving into my Master's Thesis and need some guidance. The core of my work is to linearize a complex function that's riddled with memory effects. While the Transformer architecture has been explored in the literature, I'm considering taking a fresh angle with either a Mamba architecture or spicing up the Transformer with a MoE (Mixture-of-Experts) approach. MoE-Mamba is also on the table.

The thing is: it's the first time I'm actually working with these architectures, so …
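To make the MoE idea more concrete, here is a minimal sketch (not the OP's setup) of the kind of top-1 gated mixture-of-experts feed-forward layer that could replace the dense FFN inside a standard Transformer block. The model width, expert count, and routing scheme are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    """Top-1 gated mixture-of-experts feed-forward layer (Switch-style).

    Meant as a drop-in replacement for a Transformer block's dense FFN;
    all dimensions and the number of experts are placeholder values.
    """

    def __init__(self, d_model: int = 256, d_ff: int = 1024, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)  # per-token routing logits
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        weights = F.softmax(self.gate(x), dim=-1)   # (batch, seq_len, n_experts)
        top_w, top_idx = weights.max(dim=-1)        # top-1 routing per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                     # tokens routed to expert i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = MoEFeedForward()
    y = layer(torch.randn(2, 16, 256))  # dummy batch of sequences
    print(y.shape)                      # torch.Size([2, 16, 256])
```

In practice, sparse-MoE layers are usually paired with a load-balancing auxiliary loss so the gate does not collapse onto a single expert; that detail is omitted here for brevity.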
