Policy Gradient Methods Find the Nash Equilibrium in N-player General-sum Linear-quadratic Games. (arXiv:2107.13090v2 [math.OC] UPDATED)
Aug. 16, 2022, 1:12 a.m. | Ben Hambly, Renyuan Xu, Huining Yang
stat.ML updates on arXiv.org arxiv.org
We consider a general-sum N-player linear-quadratic game with stochastic
dynamics over a finite horizon and prove the global convergence of the natural
policy gradient method to the Nash equilibrium. The proof requires a certain
amount of noise in the system: we give a condition, essentially a lower bound
on the noise covariance in terms of the model parameters, that guarantees
convergence. We illustrate our results with numerical experiments …
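To give a feel for the policy-gradient-to-Riccati connection the abstract describes, here is a minimal, hypothetical sketch of the single-player special case (not the paper's N-player natural policy gradient): plain gradient descent with a finite-difference gradient on the exact average cost of a scalar noisy linear-quadratic regulator. All parameter values (`a`, `b`, `q`, `r`, `sigma2`) and step sizes are illustrative assumptions.

```python
# Hypothetical single-player sketch, not the paper's algorithm.
# Dynamics: x_{t+1} = a*x_t + b*u_t + w_t with noise variance sigma2,
# linear policy u_t = -k*x_t, quadratic stage cost q*x^2 + r*u^2.
a, b, q, r, sigma2 = 1.2, 1.0, 1.0, 1.0, 1.0

def cost(k):
    """Average stage cost (q + r*k^2) * stationary state variance."""
    closed = a - b * k
    assert abs(closed) < 1.0, "policy must stabilize the closed loop"
    variance = sigma2 / (1.0 - closed ** 2)  # stationary E[x_t^2]
    return (q + r * k ** 2) * variance

# Policy gradient descent on the gain k, using a finite-difference
# estimate of the exact cost gradient.
k, lr, h = 1.0, 0.05, 1e-6
for _ in range(2000):
    grad = (cost(k + h) - cost(k - h)) / (2 * h)
    k -= lr * grad

# Reference solution: fixed-point iteration of the scalar algebraic
# Riccati equation, whose gain the gradient method should recover.
p = q
for _ in range(200):
    p = q + a ** 2 * p - (a * b * p) ** 2 / (r + b ** 2 * p)
k_star = a * b * p / (r + b ** 2 * p)

print(k, k_star)  # the learned gain matches the Riccati gain
```

In the paper's N-player setting each player runs such an update on its own feedback gain while the others' policies shift the effective dynamics, which is why a lower bound on the noise covariance is needed to keep the gradients informative.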