Tracing Knowledge in Language Models Back to the Training Data. (arXiv:2205.11482v2 [cs.CL] UPDATED)
May 25, 2022, 1:12 a.m. | Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu
cs.CL updates on arXiv.org arxiv.org
Neural language models (LMs) have been shown to memorize a great deal of factual knowledge. But when an LM generates an assertion, it is often difficult to determine where it learned this information and whether it is true. In this paper, we introduce a new benchmark for fact tracing: tracing language models' assertions back to the training examples that provided evidence for those predictions. Prior work has suggested that dataset-level influence methods might offer an effective framework for tracing predictions …
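The "dataset-level influence methods" the abstract refers to rank training examples by how much they support a given model prediction. A common instance is gradient similarity (TracIn-style), which scores a training example by the dot product of its loss gradient with the query's loss gradient. The sketch below illustrates that idea on a toy logistic model; the model, data, and function names are illustrative assumptions, not the paper's actual benchmark setup.

```python
# Hedged sketch of TracIn-style influence scoring:
#   score(z_train, z_query) = grad_L(z_train) . grad_L(z_query)
# Toy logistic-regression model for illustration only.
import numpy as np

def grad_logistic(w, x, y):
    """Gradient of the logistic loss at example (x, y) w.r.t. weights w."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def influence_scores(w, train_xs, train_ys, query_x, query_y):
    """Dot product of each training example's gradient with the query's."""
    gq = grad_logistic(w, query_x, query_y)
    return [float(grad_logistic(w, x, y) @ gq)
            for x, y in zip(train_xs, train_ys)]

# Toy data: two examples similar to the query, one unrelated.
w = np.zeros(3)
train_xs = [np.array([1.0, 0.0, 0.0]),
            np.array([0.9, 0.1, 0.0]),
            np.array([0.0, 0.0, 1.0])]
train_ys = [1.0, 1.0, 0.0]
query_x, query_y = np.array([1.0, 0.0, 0.0]), 1.0

scores = influence_scores(w, train_xs, train_ys, query_x, query_y)
# The training example whose gradient aligns best with the query's
# gradient receives the highest score.
print(int(np.argmax(scores)))  # 0
```

The paper's benchmark asks whether such scores actually surface the training examples that provided evidence for a factual prediction, rather than merely similar-looking text.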