March 19, 2024, 4:41 a.m. | Yudong Luo, Yangchen Pan, Han Wang, Philip Torr, Pascal Poupart

cs.LG updates on arXiv.org

arXiv:2403.11062v1 Announce Type: new
Abstract: Reinforcement learning algorithms utilizing policy gradients (PG) to optimize Conditional Value at Risk (CVaR) face significant challenges with sample inefficiency, hindering their practical applications. This inefficiency stems from two main facts: a focus on tail-end performance that overlooks many sampled trajectories, and the potential of gradient vanishing when the lower tail of the return distribution is overly flat. To address these challenges, we propose a simple mixture policy parameterization. This method integrates a risk-neutral policy …
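Both failure modes named in the abstract are easy to see in a few lines of NumPy. The sketch below is illustrative only, not the paper's method: the returns and score-function gradients (`returns`, `score_grads`) are random stand-ins for real rollout data, and the estimator is the standard REINFORCE-style CVaR policy gradient.

```python
import numpy as np

# Illustrative sketch: why CVaR policy gradients are sample-inefficient.
# With a REINFORCE-style estimator, only trajectories whose return falls
# in the lower alpha-tail carry gradient signal; the rest are discarded.

rng = np.random.default_rng(0)

alpha = 0.1    # CVaR risk level: optimize the worst 10% of returns
n_traj = 1000  # sampled trajectories per update
d = 4          # policy parameter dimension (hypothetical)

# Hypothetical rollout data: per-trajectory return R_i and score-function
# gradient g_i = grad_theta log pi(tau_i); both are random stand-ins here.
returns = rng.normal(loc=1.0, scale=1.0, size=n_traj)
score_grads = rng.normal(size=(n_traj, d))

# Empirical alpha-quantile of returns (the Value at Risk, VaR).
var_alpha = np.quantile(returns, alpha)

# CVaR-PG weight: (R_i - VaR)/alpha for tail trajectories, 0 otherwise.
# If the lower tail is nearly flat, R_i - VaR is close to 0 for every
# tail trajectory and the whole estimate vanishes.
weights = np.minimum(returns - var_alpha, 0.0) / alpha

grad_cvar = (weights[:, None] * score_grads).mean(axis=0)

used = np.mean(returns <= var_alpha)
print(f"fraction of trajectories contributing gradient: {used:.2%}")
print("CVaR gradient estimate:", grad_cvar)
```

Running this shows that only about alpha (here 10%) of the sampled trajectories contribute to the update, which is the first inefficiency the abstract points to; shrinking the spread of the lower tail drives `weights` toward zero, which is the second.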
