April 22, 2024, 4:43 a.m. | Amirreza Neshaei Moghaddam, Alex Olshevsky, Bahman Gharesifard

cs.LG updates on arXiv.org

arXiv:2404.10851v2 Announce Type: replace-cross
Abstract: We provide the first known algorithm that provably achieves $\varepsilon$-optimality within $\widetilde{\mathcal{O}}(1/\varepsilon)$ function evaluations for the discounted discrete-time LQR problem with unknown parameters, without relying on two-point gradient estimates. These estimates are known to be unrealistic in many settings, as they depend on using the exact same initialization, which is to be selected randomly, for two different policies. Our results substantially improve upon the existing literature outside the realm of two-point gradient estimates, which either …

