Feb. 16, 2022, 2:11 a.m. | Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen

cs.LG updates on arXiv.org

Monte-Carlo Tree Search (MCTS) is a class of methods for solving complex
decision-making problems through the synergy of Monte-Carlo planning and
Reinforcement Learning (RL). The highly combinatorial nature of the problems
commonly addressed by MCTS requires the use of efficient exploration strategies
for navigating the planning tree and quickly convergent value backup methods.
These crucial problems are particularly evident in recent advances that combine
MCTS with deep neural networks for function approximation. In this work, we
propose two methods for …
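For context, below is a minimal sketch of the standard UCT-style selection and averaging value backup that the abstract refers to. It is illustrative only, not the authors' proposed methods; the toy environment, function names, and constants are assumptions made for the example.

```python
# Minimal UCT-style MCTS sketch (illustrative; not the paper's proposed methods).
# Assumes a toy deterministic environment with a small action set and a random rollout policy.
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # running mean of backed-up returns

def uct_select(node, c=1.41):
    """Pick the child maximizing the UCB1 score (exploration strategy)."""
    return max(
        node.children.values(),
        key=lambda ch: ch.value + c * math.sqrt(math.log(node.visits) / ch.visits),
    )

def backup(node, ret):
    """Propagate the rollout return up the tree as a running average (value backup)."""
    while node is not None:
        node.visits += 1
        node.value += (ret - node.value) / node.visits
        node = node.parent

def mcts(root_state, step_fn, actions, n_sims=200, horizon=10):
    root = Node(root_state)
    for _ in range(n_sims):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(actions):
            node = uct_select(node)
        # Expansion: add one untried action.
        untried = [a for a in actions if a not in node.children]
        if untried:
            a = random.choice(untried)
            next_state, _ = step_fn(node.state, a)
            node.children[a] = Node(next_state, parent=node)
            node = node.children[a]
        # Rollout: random policy up to the horizon, accumulating reward.
        state, ret = node.state, 0.0
        for _ in range(horizon):
            state, r = step_fn(state, random.choice(actions))
            ret += r
        backup(node, ret)
    # Return the most visited root action.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Toy example: walk on the integer line, reward for moving right.
if __name__ == "__main__":
    step = lambda s, a: (s + a, 1.0 if a > 0 else 0.0)
    print(mcts(0, step, actions=[-1, +1]))
```

The exploration constant `c` and the plain averaging backup are exactly the components the abstract identifies as critical when MCTS is combined with deep neural network function approximation.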
