Jan. 13, 2024, 5:07 p.m. | /u/FallMindless3563

Machine Learning www.reddit.com

This question came up in our Friday paper club as we read the Mixtral 8x7B paper, and we didn't feel like we got a satisfying answer.

It seems like the argument for MoE is that you can let certain parts of the network specialize in certain domains or tasks. This strikes me as similar to the argument people make for having multi-head attention within the transformer block. Why would you only put the router in front of the feed-forward layers …
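For reference, here is a minimal sketch of the kind of routing the paper describes: a learned router scores each token, the top-2 experts (each an ordinary feed-forward block) are selected, and their outputs are combined with softmax-normalized weights. This is not the Mixtral code; the class name, dimensions, and activation are placeholders chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Sparse MoE FFN sketch: a router picks top-k experts per token."""

    def __init__(self, d_model=64, d_hidden=128, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router: one linear layer producing a logit per expert.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is an ordinary two-layer feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.SiLU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Toy usage: 10 tokens of width 64.
tokens = torch.randn(10, 64)
print(MoEFeedForward()(tokens).shape)  # torch.Size([10, 64])
```

Note that the router here sits only in front of the expert FFNs; the attention sublayer is shared by all tokens, which is exactly the design choice the question is asking about.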

