[D] Vision Mamba Strikes Again! Is the Transformer Throne Crumbling?
Jan. 24, 2024, 11:06 a.m. | /u/Instantinopaul
Machine Learning www.reddit.com
Their new model, Vision Mamba, ditches the self-attention craze and leans on state space magic. The result? Performance on par with top vision transformers like DeiT, but with better efficiency!
This might be a game-changer, folks. We're talking faster, lighter models that can run on your grandma's laptop, but still see like a hawk.
Any thoughts? I am …
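For anyone unfamiliar with the state space idea being discussed: the core trick is a linear recurrence over the token (patch) sequence, which costs O(n) per layer instead of self-attention's O(n²). Here is a minimal, illustrative sketch of that recurrence in NumPy; all the shapes and parameter values are made up for the example, and the real Vision Mamba block adds input-dependent (selective) parameters, gating, and bidirectional scans over image patches.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Run the linear state space recurrence over a sequence:
        h_t = A @ h_{t-1} + B @ x_t
        y_t = C @ h_t
    x: (seq_len, d_in) patch embeddings
    A: (d_state, d_state), B: (d_state, d_in), C: (d_out, d_state)
    One fixed-size state update per token, so cost grows linearly
    with seq_len (vs. quadratically for self-attention).
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:              # single pass over the sequence
        h = A @ h + B @ x_t    # state carries a summary of the past
        ys.append(C @ h)       # read out from the state
    return np.stack(ys)

# Toy dimensions, purely for illustration.
rng = np.random.default_rng(0)
seq_len, d_in, d_state, d_out = 16, 8, 4, 8
y = ssm_scan(
    rng.normal(size=(seq_len, d_in)),
    0.9 * np.eye(d_state),                 # stable, slowly decaying state
    0.1 * rng.normal(size=(d_state, d_in)),
    rng.normal(size=(d_out, d_state)),
)
print(y.shape)  # (16, 8): one output vector per input patch
```

Because the state `h` has fixed size, memory and compute per token are constant, which is where the efficiency claims over attention come from.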