Feb. 3, 2024, 9:58 p.m. | /u/we_are_mammals

Machine Learning www.reddit.com

[https://arxiv.org/abs/2211.08411](https://arxiv.org/abs/2211.08411)



**Abstract:**

The Internet contains a wealth of knowledge -- from the birthdays of historical figures to tutorials on how to code -- all of which may be learned by language models. However, while certain pieces of information are ubiquitous on the web, others appear extremely rarely. In this paper, we study the relationship between the knowledge memorized by large language models and the information in pre-training datasets scraped from the web. In particular, we show that a language …

