July 22, 2022, 1:11 a.m. | Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won Chung, William Fedus, Jinfeng Rao, Sharan Narang, Vinh Q. Tran, Dani Yogatama, Donald Metzler

cs.CL updates on arXiv.org arxiv.org

There has been considerable interest in the scaling properties of Transformer
models. However, little work has investigated how scaling behaviour varies
across different inductive biases and model architectures. Do model
architectures scale differently? If so, how does inductive bias affect scaling
behaviour? How does this influence upstream (pretraining) and downstream
(transfer) performance? This paper conducts a systematic study of the scaling
behaviour of ten diverse model architectures such as Transformers,
Switch Transformers, Universal …
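To make the notion of "scaling behaviour" concrete, scaling-law studies typically fit a power law relating model size to loss and compare the fitted exponents across architectures. Below is a minimal sketch of such a fit; the data points and the saturating power-law form (loss ≈ a·N^(−b) + c) are illustrative assumptions, not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical toy data: non-embedding parameter count vs. validation loss.
# These numbers are made up for illustration, not taken from the paper.
params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])
loss = np.array([4.2, 3.8, 3.4, 3.1, 2.9])

def power_law(n, a, b, c):
    # Saturating power law: loss decays as n^(-b) toward an irreducible floor c.
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(power_law, params, loss, p0=(10.0, 0.1, 2.0))
print(f"fit: loss ~ {a:.2f} * N^(-{b:.3f}) + {c:.2f}")
```

Comparing the fitted exponent b across architectures is one way to ask the paper's question: whether a given inductive bias changes how quickly a model improves with scale.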

