Aug. 23, 2022, 1:13 a.m. | Gen Li, Yuejie Chi, Yuting Wei, Yuxin Chen

stat.ML updates on arXiv.org arxiv.org

This paper is concerned with two-player zero-sum Markov games -- arguably the
most basic setting in multi-agent reinforcement learning -- with the goal of
learning a Nash equilibrium (NE) sample-optimally. All prior results suffer
from at least one of two obstacles: the curse of multiple agents and the
barrier of long horizon, regardless of the sampling protocol in use. We take a
step towards settling this problem, assuming access to a flexible sampling
mechanism: the generative model. Focusing on …

arxiv games lg markov minimax rl
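To make the Nash-equilibrium objective concrete, here is a minimal sketch of fictitious play on a zero-sum matrix game (matching pennies). This is a classical illustration of NE learning in the single-state special case, not the algorithm proposed in the paper; the function name and payoff matrix are chosen for the example.

```python
# Fictitious play on a two-player zero-sum matrix game.
# Illustrative only: a classical dynamic whose empirical play
# converges to a Nash equilibrium in zero-sum games (Robinson, 1951).

def fictitious_play(A, T):
    """Run T rounds of fictitious play on row-player payoff matrix A.

    Each round, both players best-respond to the opponent's empirical
    mixture of past actions. Returns the empirical (average) strategies,
    which approximate a Nash equilibrium of the zero-sum game.
    """
    m, n = len(A), len(A[0])
    row_counts = [0] * m   # how often the row player chose each action
    col_counts = [0] * n   # how often the column player chose each action
    row_counts[0] += 1     # arbitrary initial actions
    col_counts[0] += 1
    for _ in range(T - 1):
        # Row player maximizes expected payoff vs. column empirical mix.
        row_vals = [sum(A[i][j] * col_counts[j] for j in range(n))
                    for i in range(m)]
        i_star = max(range(m), key=lambda i: row_vals[i])
        # Column player minimizes row payoff vs. row empirical mix.
        col_vals = [sum(A[i][j] * row_counts[i] for i in range(m))
                    for j in range(n)]
        j_star = min(range(n), key=lambda j: col_vals[j])
        row_counts[i_star] += 1
        col_counts[j_star] += 1
    total = sum(row_counts)
    return ([c / total for c in row_counts],
            [c / total for c in col_counts])

# Matching pennies: the unique NE is uniform (1/2, 1/2) for both players.
x, y = fictitious_play([[1, -1], [-1, 1]], 20000)
```

In a Markov game the same minimax structure appears at every state, but the paper's focus is the sample complexity of learning the equilibrium from a generative model rather than the equilibrium computation itself.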
