July 22, 2022, 1:10 a.m. | Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won Chung, William Fedus, Jinfeng Rao, Sharan Narang, Vinh Q. Tran, Dani Yogatama, Donald Metzler

cs.LG updates on arXiv.org

There has been a great deal of interest in the scaling properties of Transformer models. However, little work has investigated how different inductive biases and model architectures affect these scaling properties. Do model architectures scale differently? If so, how does inductive bias affect scaling behaviour? How does this influence upstream (pretraining) and downstream (transfer) performance? This paper conducts a systematic study of the scaling behaviour of ten diverse model architectures such as Transformers, Switch Transformers, Universal …
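For context, "scaling behaviour" here refers to how a quantity like pretraining loss changes as model size grows. A common way to summarize it is to fit a saturating power law L(N) = a·N^(−α) + c to measured losses. The sketch below is purely illustrative and not taken from the paper; the parameter counts and loss values are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical upstream losses observed at increasing parameter counts N.
params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])       # model size N
losses = np.array([4.20, 3.85, 3.50, 3.22, 2.98])  # pretraining loss L(N)

def power_law(n, a, alpha, c):
    """Saturating power law: L(N) = a * N^(-alpha) + c."""
    return a * n ** (-alpha) + c

# Fit the three coefficients; p0 gives the optimizer a reasonable start.
(a, alpha, c), _ = curve_fit(power_law, params, losses,
                             p0=(10.0, 0.1, 1.0), maxfev=10000)
print(f"L(N) ≈ {a:.2f} * N^(-{alpha:.3f}) + {c:.2f}")
```

Comparing the fitted exponent α across architectures is one simple way to ask whether they "scale differently", which is the kind of question the paper studies systematically.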
