Unraveling the Mystery of Scaling Laws: Part I
March 12, 2024, 4:42 a.m. | Hui Su, Zhi Tian, Xiaoyu Shen, Xunliang Cai
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Scaling law principles indicate a power-law correlation between loss and variables such as model size, dataset size, and computational resources utilized during training. These principles play a vital role in optimizing various aspects of model pre-training, ultimately contributing to the success of large language models such as GPT-4, Llama and Gemini. However, the original scaling law paper by OpenAI did not disclose the complete details necessary to derive the precise scaling law formulas, and their …
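To make the abstract's "power-law correlation between loss and variables such as model size" concrete, here is a minimal illustrative sketch, not the paper's own derivation: it assumes the commonly used form L(N) = (Nc / N) ** alpha and fits it to a few hypothetical (model size, loss) points in log-log space. The data values, alpha, and Nc below are illustrative assumptions.

```python
# Illustrative sketch of a scaling-law fit (assumed form, hypothetical data):
#   L(N) = (Nc / N) ** alpha
# which is linear in log space: log L = alpha * log Nc - alpha * log N.
import numpy as np

sizes = np.array([1e7, 1e8, 1e9, 1e10])   # model parameter counts N (assumed)
losses = np.array([4.2, 3.4, 2.8, 2.3])   # hypothetical validation losses

# Least-squares line fit in log-log space.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), deg=1)
alpha = -slope                             # power-law exponent
Nc = np.exp(intercept / alpha)             # scale constant of the fitted law

print(f"alpha = {alpha:.3f}, Nc = {Nc:.3e}")

# Extrapolate the fitted law to a larger model before spending the compute.
predicted = (Nc / 1e11) ** alpha
print(f"predicted loss at N = 1e11: {predicted:.2f}")
```

The same fitting recipe applies to dataset size or training compute in place of model size; that extrapolation step is what makes scaling laws useful for planning large pre-training runs.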