Jan. 31, 2024, 4:46 p.m. | Felix Helfenstein, Jannis Blüml, Johannes Czech, Kristian Kersting

cs.LG updates on arXiv.org

This paper presents a new approach that integrates deep learning with
computational chess, using both the Mixture of Experts (MoE) method and
Monte-Carlo Tree Search (MCTS). Our methodology employs a suite of specialized
models, each designed to respond to specific changes in the game's input data.
This results in a framework with sparsely activated models, which provides
significant computational benefits. Our framework combines the MoE method with
MCTS in order to align it with the strategic phases of chess, thus …
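The abstract describes routing positions to phase-specialised networks so that only one expert is activated per evaluation. The following is a minimal sketch of that idea, not the authors' implementation: the phase heuristic, expert interface, and node structure are assumptions for illustration only.

# Minimal sketch (assumed, not the paper's code): phase-conditioned expert
# routing for leaf evaluation inside an MCTS-style search.
import math
import random
from dataclasses import dataclass, field


def game_phase(move_number: int) -> str:
    """Crude phase heuristic (assumption): bucket positions by move number."""
    if move_number <= 15:
        return "opening"
    if move_number <= 40:
        return "middlegame"
    return "endgame"


@dataclass
class Expert:
    """Stand-in for a phase-specialised policy/value network."""
    name: str

    def evaluate(self, position) -> float:
        # Placeholder value in [-1, 1]; a real expert would run a neural
        # network forward pass on the encoded position.
        return random.uniform(-1.0, 1.0)


# Sparse activation: only the expert matching the current phase is queried.
EXPERTS = {phase: Expert(phase) for phase in ("opening", "middlegame", "endgame")}


@dataclass
class Node:
    position: object
    move_number: int
    visits: int = 0
    value_sum: float = 0.0
    children: list = field(default_factory=list)

    def ucb(self, parent_visits: int, c: float = 1.4) -> float:
        """Standard UCB1 selection score used during tree traversal."""
        if self.visits == 0:
            return math.inf
        return self.value_sum / self.visits + c * math.sqrt(
            math.log(parent_visits) / self.visits
        )


def evaluate_leaf(node: Node) -> float:
    """Route the leaf to exactly one expert, chosen by game phase."""
    expert = EXPERTS[game_phase(node.move_number)]
    return expert.evaluate(node.position)

In this sketch the computational saving comes from querying a single expert per leaf rather than a monolithic network or all experts at once, mirroring the sparsely activated framework the abstract refers to.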
