[D] Vision Mamba Strikes Again! Is the Transformer Throne Crumbling?
Jan. 24, 2024, 11:06 a.m. | /u/Instantinopaul
Machine Learning www.reddit.com
Their new model, Vision Mamba, ditches the self-attention craze and leans on state-space magic. The result? Performance on par with top vision transformers like DeiT, but with better efficiency!
This might be a game-changer, folks. We're talking faster, lighter models that can run on your grandma's laptop, but still see like a hawk.
Any thoughts? I am …
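For intuition about the "state space magic" being traded for self-attention: Mamba-style layers build on a linear state-space recurrence that scans the sequence once, so cost grows linearly in sequence length L rather than quadratically like attention. A minimal toy sketch (not the actual Vision Mamba code; all matrix sizes below are made up for illustration):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Apply a discretized linear state-space model to a sequence.

    Recurrence: h_t = A @ h_{t-1} + B @ x_t,  y_t = C @ h_t.
    One pass over the L tokens -> O(L) cost, vs O(L^2) for self-attention.
    """
    L, _ = x.shape
    h = np.zeros(A.shape[0])          # hidden state
    ys = []
    for t in range(L):
        h = A @ h + B @ x[t]          # update hidden state with new token
        ys.append(C @ h)              # read out an output token
    return np.stack(ys)               # shape (L, d_out)

# Toy sizes: 16 "patch tokens" of dim 8, state dim 4 (hypothetical values).
rng = np.random.default_rng(0)
L, d_in, d_state, d_out = 16, 8, 4, 8
A = 0.9 * np.eye(d_state)             # stable state transition
B = 0.1 * rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_out, d_state))

y = ssm_scan(rng.normal(size=(L, d_in)), A, B, C)
print(y.shape)  # (16, 8)
```

The real model adds input-dependent (selective) parameters and bidirectional scans over image patches, but the linear-in-L scan above is the core reason for the efficiency claims.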