Easy as ABCs: Unifying Boltzmann Q-Learning and Counterfactual Regret Minimization
Feb. 20, 2024, 5:42 a.m. | Luca D'Amico-Wong, Hugh Zhang, Marc Lanctot, David C. Parkes
cs.LG updates on arXiv.org
Abstract: We propose ABCs (Adaptive Branching through Child stationarity), a best-of-both-worlds algorithm combining Boltzmann Q-learning (BQL), a classic reinforcement learning algorithm for single-agent domains, and counterfactual regret minimization (CFR), a central algorithm for learning in multi-agent domains. ABCs adaptively chooses what fraction of the environment to explore each iteration by measuring the stationarity of the environment's reward and transition dynamics. In Markov decision processes, ABCs converges to the optimal policy with at most an O(A) factor …
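The abstract's key idea is adaptive branching: at each state, decide whether the environment looks stationary there, and if so use a cheap single-sample (BQL-style) update, otherwise expand all actions (CFR-style). The paper's actual child-stationarity test is not given in the abstract, so the sketch below is purely illustrative: `looks_stationary` is a hypothetical drift check comparing per-state visit frequencies from two halves of the history, and `choose_actions` shows how such a test could gate the branching decision.

```python
import random
from collections import defaultdict

def looks_stationary(old_counts, new_counts, tol=0.1):
    """Crude stationarity heuristic (NOT the paper's test): compare
    normalized outcome frequencies from two halves of the per-state
    history; small L1 drift suggests stationary dynamics."""
    keys = set(old_counts) | set(new_counts)
    n_old = sum(old_counts.values()) or 1
    n_new = sum(new_counts.values()) or 1
    drift = sum(abs(old_counts[k] / n_old - new_counts[k] / n_new)
                for k in keys)
    return drift <= tol

def choose_actions(actions, old_counts, new_counts, rng=random):
    """If the state's observed dynamics look stationary, sample one
    action (BQL-style); otherwise branch over all actions (CFR-style)."""
    if looks_stationary(old_counts, new_counts):
        return [rng.choice(actions)]
    return list(actions)

# Example: near-identical frequency profiles => treated as stationary.
old = defaultdict(int, {"a": 50, "b": 50})
new = defaultdict(int, {"a": 48, "b": 52})
print(looks_stationary(old, new))            # True (drift 0.04 <= 0.1)

# Strongly shifted profile => non-stationary, so branch fully.
shifted = defaultdict(int, {"a": 10, "b": 90})
print(len(choose_actions(["a", "b"], old, shifted)))  # 2 (expand all)
```

Note the `defaultdict(int)` counts so that outcomes seen in only one half of the history contribute their full frequency to the drift, rather than raising a `KeyError`.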