March 15, 2024, 4:41 a.m. | Haoxing Tian, Ioannis Ch. Paschalidis, Alex Olshevsky

cs.LG updates on arXiv.org

arXiv:2403.08896v1 Announce Type: new
Abstract: We consider a distributed setup for reinforcement learning, where each agent has a copy of the same Markov Decision Process but transitions are sampled from the corresponding Markov chain independently by each agent. We show that in this setting, we can achieve a linear speedup for TD($\lambda$), a family of popular methods for policy evaluation, in the sense that $N$ agents can evaluate a policy $N$ times faster provided the target accuracy is small enough. …
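To make the setting concrete, below is a minimal, hedged sketch of distributed TD($\lambda$) with linear function approximation: each of $N$ agents runs the standard eligibility-trace update on its own independently sampled copy of the Markov chain, and the parameter vectors are periodically averaged. The small random chain, feature matrix, step size, and averaging schedule are illustrative assumptions for this sketch only, not the construction or analysis from the paper.

```python
# Hedged sketch: distributed TD(lambda) with linear function approximation.
# The chain, features, step size, and averaging schedule are illustrative
# assumptions, not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)

# Small Markov chain induced by a fixed policy (hypothetical example).
n_states, n_features = 5, 3
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)                   # row-stochastic transitions
r = rng.random(n_states)                            # expected reward per state
Phi = rng.standard_normal((n_states, n_features))   # linear features per state
gamma, lam, alpha = 0.9, 0.7, 0.05                  # discount, lambda, step size

def td_lambda_step(theta, z, s, s_next):
    """One TD(lambda) update: temporal-difference error plus eligibility trace."""
    delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    z = gamma * lam * z + Phi[s]
    return theta + alpha * delta * z, z

def run_agents(n_agents=4, n_steps=2000, avg_every=50):
    """Each agent samples its own chain; parameters are averaged periodically."""
    thetas = [np.zeros(n_features) for _ in range(n_agents)]
    traces = [np.zeros(n_features) for _ in range(n_agents)]
    states = [rng.integers(n_states) for _ in range(n_agents)]
    for t in range(n_steps):
        for i in range(n_agents):
            s = states[i]
            s_next = rng.choice(n_states, p=P[s])   # independent sampling per agent
            thetas[i], traces[i] = td_lambda_step(thetas[i], traces[i], s, s_next)
            states[i] = s_next
        if (t + 1) % avg_every == 0:                # simple parameter averaging
            mean_theta = np.mean(thetas, axis=0)
            thetas = [mean_theta.copy() for _ in range(n_agents)]
    return np.mean(thetas, axis=0)

print(run_agents())
```

In this toy setup the $N$ agents collectively see $N$ times as many transitions per round of averaging, which is the intuition behind the claimed linear speedup once the target accuracy is small enough.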

