Understanding Emergent Abilities of Language Models from the Loss Perspective

March 26, 2024, 4:43 a.m. | Zhengxiao Du, Aohan Zeng, Yuxiao Dong, Jie Tang

cs.LG updates on arXiv.org

arXiv:2403.15796v1 Announce Type: cross
Abstract: Recent studies have called into question the belief that emergent abilities in language models are exclusive to large models. This skepticism arises from two observations: 1) smaller models can also exhibit high performance on emergent abilities, and 2) there is doubt about the discontinuous metrics used to measure these abilities. In this paper, we propose to study emergent abilities through the lens of pre-training loss, instead of model size or training compute. We demonstrate that …
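The framing can be illustrated with a minimal sketch (not the authors' code; all numbers are invented placeholders): instead of plotting a downstream metric against parameter count, plot it against pre-training loss and look for where performance departs from chance.

```python
# Illustrative sketch of the loss-perspective framing, NOT the paper's code.
# All data points below are invented placeholders for demonstration only.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical (pre-training loss, downstream accuracy) pairs gathered from
# checkpoints of models of various sizes; lower loss = better language model.
pretrain_loss = np.array([3.2, 2.9, 2.6, 2.3, 2.1, 1.9, 1.7])
accuracy = np.array([0.25, 0.26, 0.27, 0.30, 0.45, 0.62, 0.74])

plt.figure(figsize=(5, 3.5))
plt.plot(pretrain_loss, accuracy, "o-")
plt.gca().invert_xaxis()  # loss decreases as training/scale improves
plt.axhline(0.25, linestyle="--", label="chance level (4-way task)")
plt.xlabel("pre-training loss (nats per token)")
plt.ylabel("downstream task accuracy")
plt.legend()
plt.tight_layout()
plt.show()
```

Under this view, models of different sizes that reach the same pre-training loss would be expected to land near the same point on the curve, which is the kind of pattern the abstract's framing suggests examining.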
