May 17, 2023, 4:25 p.m. | Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu

Blog Content - TOGETHER www.together.xyz

The mixture proportions of pretraining data domains (e.g., Wikipedia,
books, web text) greatly affect language model (LM) performance. In this
paper, we propose Domain Reweighting with Minimax Optimization (DoReMi),
which first trains a small proxy model using group distributionally robust
optimization (Group DRO) over domains to produce domain weights (mixture
proportions) without knowledge of downstream tasks. We then resample the
pretraining dataset with these domain weights and train a larger,
full-sized model.
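To make the Group DRO step concrete, here is a minimal sketch of the kind of exponentiated-gradient domain-weight update the abstract describes: domains where the proxy model's loss exceeds a reference model's loss get upweighted. The function name, step size, and smoothing constant are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def update_domain_weights(weights, proxy_losses, ref_losses,
                          step_size=1.0, smoothing=1e-3):
    """One illustrative Group DRO-style update of the domain weights.

    A sketch under assumed hyperparameters (step_size, smoothing);
    not the paper's implementation.
    """
    # Per-domain excess loss: how much worse the proxy model is than
    # the reference; clip at zero so only lagging domains move up.
    excess = np.maximum(proxy_losses - ref_losses, 0.0)
    # Multiplicative (exponentiated-gradient) update, then renormalize
    # so the weights remain a valid mixture distribution.
    w = weights * np.exp(step_size * excess)
    w = w / w.sum()
    # Mix with the uniform distribution so no domain collapses to zero.
    k = len(w)
    return (1.0 - smoothing) * w + smoothing * np.ones(k) / k

# Toy usage: three domains; the proxy lags the reference on domain 0,
# so its mixture proportion increases after the update.
weights = np.ones(3) / 3
proxy = np.array([2.1, 1.4, 1.8])
ref = np.array([1.7, 1.5, 1.8])
weights = update_domain_weights(weights, proxy, ref)
print(weights)  # domain 0 is upweighted
```

In DoReMi these per-step weights are averaged over training to give the final mixture proportions used to resample data for the full-sized model.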

