June 7, 2022, 1:10 a.m. | Clare Lyle, Mark Rowland, Will Dabney, Marta Kwiatkowska, Yarin Gal

cs.LG updates on arXiv.org arxiv.org

Solving a reinforcement learning (RL) problem poses two competing challenges:
fitting a potentially discontinuous value function, and generalizing well to
new observations. In this paper, we analyze the learning dynamics of temporal
difference algorithms to gain novel insight into the tension between these two
objectives. We show theoretically that temporal difference learning encourages
agents to fit non-smooth components of the value function early in training,
and at the same time induces the second-order effect of discouraging
generalization. We corroborate these …
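For a concrete reference point, here is a minimal sketch of the one-step temporal difference (TD(0)) value update whose learning dynamics the paper analyzes. The environment interface, `policy`, and hyperparameters below are illustrative assumptions for a tabular setting, not the authors' experimental setup.

```python
import numpy as np

def td0_value_estimation(env, policy, num_states, alpha=0.1, gamma=0.99, episodes=500):
    """Estimate V(s) under a fixed policy with one-step TD(0) updates.

    Assumes a hypothetical env with reset() -> state and
    step(action) -> (next_state, reward, done), and integer states.
    """
    V = np.zeros(num_states)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            # TD(0) update: move V(s) toward the bootstrapped target r + gamma * V(s').
            target = r + (0.0 if done else gamma * V[s_next])
            V[s] += alpha * (target - V[s])
            s = s_next
    return V
```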

arxiv dynamics learning reinforcement reinforcement learning
