DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining
May 17, 2023, 4:25 p.m. | Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu
Blog Content - TOGETHER www.together.xyz
The mixture proportions of pretraining data domains (e.g., Wikipedia,
books, web text) greatly affect language model (LM) performance. In this
paper, we propose Domain Reweighting with Minimax Optimization (DoReMi),
which first trains a small proxy model using group distributionally robust
optimization (Group DRO) over domains to produce domain weights (mixture
proportions) without knowledge of downstream tasks.