March 6, 2024, 5:48 a.m. | Hitesh Golchha, Sahil Yerawar, Dhruvesh Patel, Soham Dan, Keerthiram Murugesan

cs.CL updates on arXiv.org

arXiv:2403.03141v1 Announce Type: new
Abstract: Real-world sequential decision making is characterized by sparse rewards and large decision spaces, posing significant difficulty for experiential learning systems like $\textit{tabula rasa}$ reinforcement learning (RL) agents. Large Language Models (LLMs), with a wealth of world knowledge, can help RL agents learn quickly and adapt to distribution shifts. In this work, we introduce the Language Guided Exploration (LGE) framework, which uses a pre-trained language model (called GUIDE) to provide decision-level guidance to an RL agent …
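To make the idea of decision-level guidance concrete, here is a minimal sketch of how a pre-trained language model's action scores might shape an RL agent's exploration policy. The function names (`guide_action_scores`, `guided_policy`) and the mixing weight `beta` are illustrative assumptions, not the paper's actual implementation; the guide is stubbed out so the example stays runnable.

```python
import numpy as np

def guide_action_scores(observation_text, action_texts):
    """Hypothetical GUIDE scorer: in the framework this would be a pre-trained
    language model ranking candidate actions given the textual observation.
    Here we return uniform placeholder scores to keep the sketch self-contained."""
    return np.zeros(len(action_texts))

def guided_policy(q_values, observation_text, action_texts, beta=1.0, temperature=1.0):
    """Combine the RL agent's action values with decision-level guidance from
    the language model, then sample an action from the resulting distribution.
    `beta` (an assumed knob) controls how strongly the guide biases exploration."""
    guide = guide_action_scores(observation_text, action_texts)
    logits = q_values / temperature + beta * guide
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(action_texts), p=probs)

# Toy usage: three candidate actions in a text-based environment.
q = np.array([0.2, 1.5, -0.3])
action = guided_policy(q, "You are in a kitchen.", ["open fridge", "go north", "wait"])
print("chosen action index:", action)
```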

