[D] Question about Mixture of Experts in Transformers - Has anyone tried adding the router before the Multi-Head Attention Blocks?
Jan. 13, 2024, 5:07 p.m. | /u/FallMindless3563
Machine Learning www.reddit.com
It seems like the argument for MoE is that you can let certain parts of the network specialize in certain domains or tasks. That strikes me as similar to the argument people make for having multi-head attention within the transformer block. Why would you only put the router in front of the feed-forward layers …
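For concreteness, here is a minimal sketch (PyTorch, top-1 routing) of the two placements being contrasted: the standard one, where the router sits in front of a set of expert feed-forward networks, and the hypothetical one asked about here, where the router instead selects among expert attention blocks. All class and variable names are illustrative, not taken from any paper or from the Mixtral implementation, and the per-sequence routing in the attention variant is just one possible design choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    """Standard placement: a per-token router picks one expert FFN."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                            # x: (batch, seq, d_model)
        weights = F.softmax(self.router(x), dim=-1)  # (batch, seq, n_experts)
        top_w, top_idx = weights.max(dim=-1)         # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out


class MoEAttention(nn.Module):
    """Hypothetical placement: the router picks one expert attention block.
    Routing is done per sequence (mean-pooled tokens) because attention mixes
    information across positions, unlike the position-wise FFN."""

    def __init__(self, d_model: int, n_heads: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_experts)
        )

    def forward(self, x):                                      # x: (batch, seq, d_model)
        weights = F.softmax(self.router(x.mean(dim=1)), dim=-1)
        top_w, top_idx = weights.max(dim=-1)                   # (batch,)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                attn_out, _ = expert(x[mask], x[mask], x[mask])
                out[mask] = top_w[mask].view(-1, 1, 1) * attn_out
        return out


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)
    print(MoEFeedForward(64, 256, n_experts=4)(x).shape)       # torch.Size([2, 16, 64])
    print(MoEAttention(64, n_heads=4, n_experts=4)(x).shape)   # torch.Size([2, 16, 64])
```

One practical reason the router is usually placed only in front of the FFN is that the FFN acts independently on each token, so per-token routing is cheap and well defined, whereas routing whole attention blocks forces a coarser (per-sequence or per-segment) decision, as the sketch above illustrates.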