March 29, 2024, 4:43 a.m. | Jonathan Colaço Carr, Prakash Panangaden, Doina Precup

cs.LG updates on arXiv.org

arXiv:2311.01990v2 Announce Type: replace
Abstract: Learning from Preferential Feedback (LfPF) plays an essential role in training Large Language Models, as well as certain types of interactive learning agents. However, a substantial gap exists between the theory and application of LfPF algorithms. Current results guaranteeing the existence of optimal policies in LfPF problems assume that both the preferences and transition dynamics are determined by a Markov Decision Process. We introduce the Direct Preference Process, a new framework for analyzing LfPF problems …
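To make the setting concrete, the sketch below shows the standard building block of LfPF: fitting a reward model from pairwise trajectory preferences with a Bradley-Terry likelihood. This is a minimal, hypothetical illustration of the problem class the abstract refers to, not an implementation of the paper's Direct Preference Process; all data, dimensions, and variable names are made up for the example.

```python
# Minimal sketch (assumed, not from the paper): learn a linear reward
# model from pairwise trajectory preferences via a Bradley-Terry model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each trajectory is summarised by a feature vector;
# label y = 1 means the first trajectory of the pair was preferred.
n_pairs, dim = 500, 8
phi_a = rng.normal(size=(n_pairs, dim))   # features of trajectory A
phi_b = rng.normal(size=(n_pairs, dim))   # features of trajectory B
w_true = rng.normal(size=dim)             # unknown "true" reward weights
p_prefer_a = 1 / (1 + np.exp(-(phi_a - phi_b) @ w_true))
y = rng.binomial(1, p_prefer_a)           # noisy preference labels

# Logistic regression on return differences (Bradley-Terry likelihood).
w = np.zeros(dim)
lr = 0.1
for _ in range(2000):
    logits = (phi_a - phi_b) @ w
    probs = 1 / (1 + np.exp(-logits))
    grad = (phi_a - phi_b).T @ (probs - y) / n_pairs
    w -= lr * grad

print("cosine(w, w_true) =",
      w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true)))
```

The point of the example is the assumption the abstract highlights: this pipeline presumes preferences are generated from an underlying reward/MDP structure, which is exactly the premise the Direct Preference Process framework is introduced to relax.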

