Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws
April 9, 2024, 4:43 a.m. | Zeyuan Allen-Zhu, Yuanzhi Li
cs.LG updates on arXiv.org
Abstract: Scaling laws describe the relationship between the size of language models and their capabilities. Unlike prior studies that evaluate a model's capability via loss or benchmarks, we estimate the number of knowledge bits a model stores. We focus on factual knowledge represented as tuples, such as (USA, capital, Washington D.C.) from a Wikipedia page. Through multiple controlled datasets, we establish that language models can store 2 bits of knowledge per parameter, and no more, even …
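The abstract's two concrete ingredients are the (name, attribute, value) tuple representation of a fact and the ~2 bits/parameter capacity law. Below is a minimal back-of-the-envelope sketch, not code from the paper: the `Fact` type and `capacity_bits` helper are hypothetical names, and the per-fact bit cost used at the end is an assumed figure for illustration only.

```python
from typing import NamedTuple


class Fact(NamedTuple):
    """A piece of factual knowledge as a (name, attribute, value) tuple."""
    name: str
    attribute: str
    value: str


def capacity_bits(num_parameters: int, bits_per_param: float = 2.0) -> float:
    """Upper bound on stored knowledge under the ~2 bits/parameter law."""
    return bits_per_param * num_parameters


# The paper's running example of a factual tuple.
example = Fact("USA", "capital", "Washington D.C.")

# Hypothetical arithmetic: if encoding one tuple against its dataset costs
# ~50 bits (an assumed figure, not from the paper), a 7B-parameter model
# could store on the order of:
params = 7_000_000_000
assumed_bits_per_fact = 50.0
total = capacity_bits(params)
print(f"capacity ≈ {total:.3g} bits ≈ {total / assumed_bits_per_fact:.3g} facts")
```

The point of the sketch is only the shape of the law: capacity grows linearly in parameter count, so halving model size halves the number of storable knowledge bits, regardless of how many bits any one fact happens to cost.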