Dec. 1, 2023, 11:16 p.m. | /u/FallMindless3563

Machine Learning www.reddit.com

We have a reading club every Friday called Arxiv Dives where we go over the fundamentals of many of the state-of-the-art techniques used in Machine Learning today. Last week we dove into the "Vision Transformers" (ViT) paper from 2021, in which the Google Brain team benchmarked training large-scale transformers against ResNets.

Though it is no longer groundbreaking research, I think that with the pace of AI it is important to dive deep into past work …
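For readers unfamiliar with the paper, the core idea of ViT is to split an image into fixed-size patches, flatten each patch, and feed the linearly projected patches (plus position embeddings) to a standard transformer. Below is a minimal NumPy sketch of just that patch-embedding step, assuming 16×16 patches and a 768-dim model as in ViT-Base; the function names and randomly initialized weights are illustrative, not the authors' code.

```python
import numpy as np

def patchify(image, patch_size):
    # Split an (H, W, C) image into flattened, non-overlapping
    # patch_size x patch_size patches (ViT's first step).
    H, W, C = image.shape
    p = patch_size
    return (
        image.reshape(H // p, p, W // p, p, C)
             .transpose(0, 2, 1, 3, 4)     # group the two patch-grid axes
             .reshape(-1, p * p * C)        # one row per flattened patch
    )

def embed_patches(patches, d_model, rng):
    # Linear projection of flattened patches plus learned position
    # embeddings -- both randomly initialized here for illustration.
    n, d_patch = patches.shape
    W_proj = rng.standard_normal((d_patch, d_model)) * 0.02
    pos = rng.standard_normal((n, d_model)) * 0.02
    return patches @ W_proj + pos

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))            # one toy RGB image
patches = patchify(img, 16)                # 14*14 = 196 patches
tokens = embed_patches(patches, 768, rng)  # 196 tokens of width 768
print(patches.shape, tokens.shape)         # (196, 768) (196, 768)
```

In the full model, a learnable class token is prepended to these 196 tokens before they pass through the transformer encoder.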

