May 19, 2022, 1:11 a.m. | Samuel Neumann, Sungsu Lim, Ajin Joseph, Yangchen Pan, Adam White, Martha White

cs.LG updates on arXiv.org

Many policy gradient methods are variants of Actor-Critic (AC), where a value
function (critic) is learned to facilitate updating the parameterized policy
(actor). The update to the actor involves a log-likelihood update weighted by
the action-values, with the addition of entropy regularization for soft
variants. In this work, we explore an alternative update for the actor, based
on an extension of the cross-entropy method (CEM) to condition on inputs
(states). The idea is to start with a broader policy …
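The abstract is truncated above, but the two actor updates it contrasts can be sketched. Below is a minimal, illustrative PyTorch sketch, not the paper's implementation: the standard (soft) actor-critic loss, a Q-weighted log-likelihood with optional entropy regularization, alongside a conditional-CEM-style loss that samples actions from the current policy at each state, keeps the top fraction by action-value, and raises the likelihood of those elite actions. All names and hyperparameters here (GaussianPolicy, q_net, n_samples, elite_frac) are assumptions made for illustration.

```python
# Illustrative sketch only; q_net(state, action) -> Q-value is an assumed critic.
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Maps a state to a Gaussian distribution over continuous actions."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def dist(self, state):
        h = self.body(state)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

def ac_actor_loss(policy, q_net, states, actions, entropy_coef=0.0):
    """Standard AC update: log-likelihood weighted by action-values,
    with entropy regularization added for the soft variant."""
    d = policy.dist(states)
    log_prob = d.log_prob(actions).sum(-1)
    with torch.no_grad():
        q = q_net(states, actions).squeeze(-1)  # critic treated as fixed
    return -(q * log_prob).mean() - entropy_coef * d.entropy().sum(-1).mean()

def conditional_cem_actor_loss(policy, q_net, states, n_samples=30, elite_frac=0.2):
    """Conditional-CEM-style update: sample actions from the current (broad)
    policy per state, keep the top fraction by Q-value, and increase the
    likelihood of those elite actions only."""
    with torch.no_grad():
        actions = policy.dist(states).sample((n_samples,))  # (n, batch, act)
        s_rep = states.unsqueeze(0).expand(n_samples, *states.shape)
        q = q_net(s_rep.reshape(-1, states.shape[-1]),
                  actions.reshape(-1, actions.shape[-1]))
        q = q.squeeze(-1).reshape(n_samples, -1)            # (n, batch)
        n_elite = max(1, int(elite_frac * n_samples))
        elite_idx = q.topk(n_elite, dim=0).indices          # (n_elite, batch)
        elites = actions.gather(
            0, elite_idx.unsqueeze(-1).expand(-1, -1, actions.shape[-1]))
    # Re-score elites with gradients enabled so the policy moves toward them.
    log_prob = policy.dist(states).log_prob(elites).sum(-1)
    return -log_prob.mean()
```

In the abstract's framing the policy starts broad and is gradually concentrated on the highest-valued actions; the fixed elite_frac above is a simplification of that narrowing schedule.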
