The "Memory-efficient NLLB-200: Language-specific Expert Pruning" paper: Is gate statistics available anywhere?
June 10, 2023, 10:32 p.m. | /u/BT_Uytya
Natural Language Processing www.reddit.com
> We will release the ids of the pruned experts, along with other experts’ gathered statistics so that anyone with a single 32GB GPU can use the NLLB-200 model at inference.
...
> All gate statistics will be openly released to foster future research.
To the best of my knowledge, no such data (or code) is available anywhere. Should I wait for the paper to be accepted for publication? What's the best way to …
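For anyone else waiting on the release, the kind of "gate statistics" the paper describes can be approximated locally. Below is a minimal, hypothetical sketch of the general idea: count how often the MoE gate routes tokens to each expert for a given language, then prune the least-used experts. All names here (the toy routing trace, `keep_fraction`, the helper functions) are illustrative assumptions, not the NLLB-200 implementation or its released data.

```python
# Hypothetical sketch of language-specific MoE expert pruning from gate
# statistics. Not the NLLB-200 code; a toy illustration of the concept.
from collections import Counter

def gather_gate_statistics(routing_decisions):
    """Count how often each expert is selected by the gate.

    routing_decisions: iterable of expert-index tuples, one per token
    (the top-k experts the gate routed that token to).
    """
    counts = Counter()
    for experts in routing_decisions:
        counts.update(experts)
    return counts

def prune_experts(counts, num_experts, keep_fraction=0.5):
    """Keep the most frequently used experts; return ids of the pruned ones."""
    ranked = sorted(range(num_experts),
                    key=lambda e: counts.get(e, 0), reverse=True)
    keep = int(num_experts * keep_fraction)
    return sorted(ranked[keep:])

# Toy routing trace for one language: each token routed to its top-2 experts.
trace = [(0, 3), (0, 1), (3, 0), (2, 3), (0, 3)]
stats = gather_gate_statistics(trace)
pruned = prune_experts(stats, num_experts=4, keep_fraction=0.5)
print(pruned)  # experts 1 and 2 are rarely selected -> pruned
```

In practice you would collect the trace by hooking the router of each MoE layer during inference on monolingual data, per language, which is presumably what the released statistics would save people from having to redo.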