Discovering Latent Knowledge in Language Models Without Supervision
March 5, 2024, 2:45 p.m. | Collin Burns, Haotian Ye, Dan Klein, Jacob Steinhardt
cs.LG updates on arXiv.org
Abstract: Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect. We propose circumventing this issue by directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way. Specifically, we introduce a method for …
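The truncation cuts the abstract off before it names the method. The paper introduces Contrast-Consistent Search (CCS): fit a small probe on the model's internal activations for a yes/no contrast pair so that a statement and its negation receive complementary probabilities, with no labels at all. The sketch below is a minimal PyTorch illustration, not the authors' code; the random phi_pos/phi_neg tensors are hypothetical stand-ins for real activations, which in the paper are extracted from a language model and normalized per prompt template.

import torch

# Hypothetical stand-ins: 256 contrast pairs of 768-dim activations.
# In practice these would be the model's hidden states for the
# "x? Yes" and "x? No" completions of each question x.
hidden_dim, n_pairs, num_steps = 768, 256, 1000
phi_pos = torch.randn(n_pairs, hidden_dim)  # activations for "x? Yes"
phi_neg = torch.randn(n_pairs, hidden_dim)  # activations for "x? No"

def ccs_loss(p_pos, p_neg):
    # Consistency: a statement and its negation should receive
    # complementary probabilities, p(x+) ~ 1 - p(x-).
    consistency = (p_pos - (1 - p_neg)) ** 2
    # Confidence: penalize the degenerate solution p = 0.5 everywhere.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

# Linear probe mapping an activation vector to a probability.
probe = torch.nn.Sequential(torch.nn.Linear(hidden_dim, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
for _ in range(num_steps):
    opt.zero_grad()
    loss = ccs_loss(probe(phi_pos).squeeze(-1), probe(phi_neg).squeeze(-1))
    loss.backward()
    opt.step()

At inference time the two views are averaged: answer "yes" when 0.5 * (p(x+) + (1 - p(x-))) exceeds 0.5. Because training is fully unsupervised, the learned direction is only identified up to sign, so truth and falsehood may come out swapped and must be disambiguated afterward.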
Tags: abstract, arxiv, cs.ai, cs.cl, cs.lg, errors, generate, human, humans, imitation learning, knowledge, language, language models, rate, supervision, text, them, train, training, truth, type
Jobs in AI, ML, Big Data
AI Engineer Intern, Agents @ Occam AI | US
AI Research Scientist @ Vara | Berlin, Germany and Remote
Data Architect @ University of Texas at Austin | Austin, TX
Data ETL Engineer @ University of Texas at Austin | Austin, TX
Lead GNSS Data Scientist @ Lurra Systems | Melbourne
Lead Data Modeler @ Sherwin-Williams | Cleveland, OH, United States