May 17, 2023, 4:25 p.m. | Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu

Source: Together blog (www.together.xyz)

The mixture proportions of pretraining data domains (e.g., Wikipedia,
books, web text) greatly affect language model (LM) performance. In this
paper, we propose Domain Reweighting with Minimax Optimization (DoReMi),
which first trains a small proxy model using group distributionally robust
optimization (Group DRO) over domains to produce domain weights (mixture
proportions) without knowledge of downstream tasks. DoReMi then resamples
the pretraining dataset according to these domain weights and trains a
larger, full-sized model on the reweighted mixture.
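At the heart of the proxy-model phase is a Group DRO-style exponentiated-gradient update: domains where the proxy model lags a pretrained reference model (high excess loss) get upweighted multiplicatively. Below is a minimal sketch of that update, assuming per-domain losses for the proxy and reference models are already computed each step; the function name, `step_size`, `smoothing`, and the `training_stream` iterator are illustrative, not the paper's actual implementation.

```python
import numpy as np

def doremi_weight_update(weights, proxy_losses, ref_losses,
                         step_size=1.0, smoothing=1e-3):
    """One multiplicative-weights update of the domain mixture.

    weights:      current mixture proportions over k domains (sums to 1)
    proxy_losses: per-domain loss of the small proxy model this step
    ref_losses:   per-domain loss of the fixed reference model
    """
    # Excess loss: how much worse the proxy is than the reference on
    # each domain; clipped at 0 so only lagging domains are upweighted.
    excess = np.maximum(proxy_losses - ref_losses, 0.0)

    # Exponentiated-gradient ascent step on the domain weights.
    logits = np.log(weights) + step_size * excess
    new_weights = np.exp(logits - logits.max())
    new_weights /= new_weights.sum()

    # Mix with the uniform distribution so no domain's weight hits zero.
    k = len(weights)
    return (1.0 - smoothing) * new_weights + smoothing / k

# Hypothetical usage: average the per-step weights over training to get
# the final mixture used to resample data for the full-sized model.
weights = np.full(3, 1 / 3)  # e.g., Wikipedia, books, web text
history = []
for proxy_losses, ref_losses in training_stream:  # assumed iterator
    weights = doremi_weight_update(weights, proxy_losses, ref_losses)
    history.append(weights)
final_mixture = np.mean(history, axis=0)
```

The uniform smoothing term is the standard Group DRO safeguard: it keeps every domain sampled occasionally, so the excess-loss estimates stay informative even for heavily downweighted domains.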
