Feb. 26, 2024, 5:41 a.m. | Victoria Lin, Eli Ben-Michael, Louis-Philippe Morency

cs.LG updates on arXiv.org

arXiv:2402.14979v1 Announce Type: new
Abstract: As large language models (LLMs) see greater use in academic and commercial settings, there is increasing interest in methods that allow language models to generate texts aligned with human preferences. In this paper, we present an initial exploration of language model optimization for human preferences from direct outcome datasets, where each sample consists of a text and an associated numerical outcome measuring the reader's response. We first propose that language model optimization should be viewed …
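To make the setup concrete, here is a minimal, purely illustrative sketch of a "direct outcome dataset" as described in the abstract: each sample pairs a text with a numerical outcome measuring the reader's response. The regression model, variable names, and example values below are assumptions for illustration only, not the authors' method.

```python
# Illustrative sketch: (text, outcome) pairs and a simple outcome predictor.
# The model here is a hypothetical stand-in, not the paper's approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Each sample: a text plus a numerical reader response (made-up values).
dataset = [
    ("The results were clear and well explained.", 4.5),
    ("Confusing layout, hard to follow the argument.", 1.5),
    ("Solid methodology with useful ablations.", 4.0),
    ("Too long and repetitive.", 2.0),
]
texts, outcomes = zip(*dataset)

# Fit a simple predictor of the reader outcome from the text; such a model
# could in principle supply the signal for optimizing a language model
# toward preferred responses.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
outcome_model = Ridge().fit(X, outcomes)

new_text = ["Clear writing with a thorough evaluation."]
predicted = outcome_model.predict(vectorizer.transform(new_text))
print(f"Predicted reader outcome: {predicted[0]:.2f}")
```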

