March 12, 2024, 4:42 a.m. | Hui Su, Zhi Tian, Xiaoyu Shen, Xunliang Cai

cs.LG updates on arXiv.org

arXiv:2403.06563v1 Announce Type: new
Abstract: Scaling law principles indicate a power-law correlation between loss and variables such as model size, dataset size, and computational resources utilized during training. These principles play a vital role in optimizing various aspects of model pre-training, ultimately contributing to the success of large language models such as GPT-4, Llama and Gemini. However, the original scaling law paper by OpenAI did not disclose the complete details necessary to derive the precise scaling law formulas, and their …
