Feb. 6, 2024, 5:44 a.m. | Kolby Nottingham, Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Sameer Singh, Peter Clark, Roy Fox

cs.LG updates on arXiv.org

Large language models (LLMs) have recently been used for sequential decision making in interactive environments. However, leveraging environment reward signals for continual LLM actor improvement is not straightforward. We propose Skill Set Optimization (SSO) for improving LLM actor performance through constructing and refining sets of transferable skills. SSO constructs skills by extracting common subtrajectories with high rewards and generating subgoals and instructions to represent each skill. These skills are provided to the LLM actor in-context to reinforce behaviors with high …
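The extraction step described above (mining common, high-reward subtrajectories and turning them into in-context skills) can be illustrated with a toy sketch. This is a simplified assumption-laden rendering, not the paper's implementation: real SSO scores subtrajectories against environment rewards and uses the LLM itself to generate subgoals and instructions, whereas here skills are plain strings and extraction is simple subsequence counting.

```python
from collections import Counter

def extract_skills(trajectories, min_count=2, min_reward=1.0, max_len=3):
    """Toy sketch of SSO-style skill extraction: find action
    subsequences that recur across high-reward trajectories, then
    phrase each as an in-context 'skill' (subgoal + steps).
    All parameter names here are illustrative, not from the paper."""
    counts = Counter()
    for steps, total_reward in trajectories:
        if total_reward < min_reward:  # keep only successful episodes
            continue
        for i in range(len(steps)):
            for j in range(i + 1, min(i + max_len, len(steps)) + 1):
                counts[tuple(steps[i:j])] += 1
    # keep multi-step subsequences shared by enough successful trajectories
    skills = [seq for seq, c in counts.items()
              if c >= min_count and len(seq) > 1]
    return ["Subgoal: achieve '{}'. Steps: {}".format(seq[-1], " -> ".join(seq))
            for seq in skills]

# Hypothetical trajectories: (action sequence, total reward)
trajs = [
    (["open door", "take key", "unlock chest"], 2.0),
    (["go north", "take key", "unlock chest"], 2.0),
    (["wander", "wander"], 0.0),
]
for skill in extract_skills(trajs):
    print(skill)
```

The extracted skill strings would then be prepended to the LLM actor's prompt, reinforcing the rewarded behavior on later episodes.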

