Dynamic Reward Adjustment in Multi-Reward Reinforcement Learning for Counselor Reflection Generation
March 21, 2024, 4:42 a.m. | Do June Min, Veronica Perez-Rosas, Kenneth Resnicow, Rada Mihalcea
cs.LG updates on arXiv.org
Abstract: In this paper, we study the problem of multi-reward reinforcement learning to jointly optimize for multiple text qualities for natural language generation. We focus on the task of counselor reflection generation, where we optimize the generators to simultaneously improve the fluency, coherence, and reflection quality of generated counselor responses. We introduce two novel bandit methods, DynaOpt and C-DynaOpt, which rely on the broad strategy of combining rewards into a single value and optimizing them simultaneously. …
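The excerpt does not spell out how DynaOpt or C-DynaOpt adapt the reward mixture, but the broad strategy it names — a bandit that dynamically shifts emphasis across reward components (here: fluency, coherence, reflection quality) while combining them into a single training signal — can be sketched with a standard Exp3 bandit. This is an illustrative assumption, not the paper's exact algorithm, and the reward scorers below are placeholders:

```python
import math
import random

class Exp3:
    """Exp3 adversarial bandit over K arms; suits non-stationary rewards
    like those seen during RL fine-tuning."""
    def __init__(self, k, gamma=0.1):
        self.k = k
        self.gamma = gamma          # exploration rate
        self.weights = [1.0] * k

    def probs(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.k
                for w in self.weights]

    def pull(self):
        p = self.probs()
        arm = random.choices(range(self.k), weights=p)[0]
        return arm, p[arm]

    def update(self, arm, prob, reward):
        # importance-weighted estimate keeps the update unbiased
        x_hat = reward / prob
        self.weights[arm] *= math.exp(self.gamma * x_hat / self.k)

# Placeholder scorers standing in for learned reward models (assumptions):
def fluency(text):    return min(1.0, len(text.split()) / 20)
def coherence(text):  return 0.8
def reflection(text): return 0.6

rewards = [fluency, coherence, reflection]
bandit = Exp3(k=len(rewards), gamma=0.1)

random.seed(0)
for step in range(100):
    # In a real setup this would be a freshly generated counselor response.
    response = "a generated counselor reflection goes here"
    arm, prob = bandit.pull()        # choose which quality to emphasize now
    r = rewards[arm](response)       # score the response on that quality
    bandit.update(arm, prob, r)      # shift future emphasis toward what pays off

print([round(p, 3) for p in bandit.probs()])
```

Over training, the bandit's arm probabilities drift toward the reward components that currently yield the most signal, giving the dynamic (rather than fixed-weight) combination the abstract alludes to.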