Sept. 2, 2022, 1:12 a.m. | Tal Lancewicki, Aviv Rosenberg, Yishay Mansour

cs.LG updates on arXiv.org

We study cooperative online learning in stochastic and adversarial Markov
decision processes (MDPs). That is, in each episode, $m$ agents interact with an
MDP simultaneously and share information in order to minimize their individual
regret. We consider environments with two types of randomness: \emph{fresh} --
where each agent's trajectory is sampled i.i.d., and \emph{non-fresh} -- where
the realization is shared by all agents (but each agent's trajectory is also
affected by its own actions). More precisely, with non-fresh randomness the …
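The fresh vs. non-fresh distinction can be made concrete with a small simulation. The sketch below is not from the paper; it is a minimal illustration for a hypothetical tabular MDP (all names such as `step_fresh`, `step_non_fresh`, and the sizes are assumptions). Under fresh randomness each agent draws its own next state independently; under non-fresh randomness all agents share one uniform draw per step, so agents in the same state taking the same action see the same transition, while trajectories can still diverge through their actions.

```python
import numpy as np

# Hypothetical tabular MDP: sizes chosen only for illustration.
n_states, n_actions, horizon, n_agents = 5, 3, 4, 2
rng = np.random.default_rng(0)
# Transition kernel: P[h, s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(horizon, n_states, n_actions))

def step_fresh(h, s, a):
    """Fresh randomness: an independent draw for each agent."""
    return int(rng.choice(n_states, p=P[h, s, a]))

def step_non_fresh(h, s, a, u):
    """Non-fresh randomness: a shared uniform draw u fixes the realization;
    the next state is obtained by inverse-CDF sampling, so two agents at the
    same (h, s, a) transition to the same next state."""
    idx = np.searchsorted(np.cumsum(P[h, s, a]), u)
    return int(min(idx, n_states - 1))

# One episode with non-fresh randomness: a single shared draw per step.
shared_u = rng.uniform(size=horizon)
states = [0] * n_agents
for h in range(horizon):
    # Placeholder uniform-random policies, one per agent.
    actions = [int(rng.integers(n_actions)) for _ in range(n_agents)]
    states = [step_non_fresh(h, s, a, shared_u[h]) for s, a in zip(states, actions)]
print("final states (non-fresh):", states)
```

Replacing `step_non_fresh` with `step_fresh` in the loop gives the fresh-randomness model, where information sharing between agents is intuitively less powerful because each agent sees an independent realization.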
