Nov. 8, 2022, 2:12 a.m. | Balazs Varga, Balazs Kulcsar, Morteza Haghir Chehreghani

cs.LG updates on arXiv.org

In this paper, we place deep Q-learning in a control-oriented perspective
and study its learning dynamics with well-established techniques from robust
control. We formulate an uncertain linear time-invariant model of the learning
process by means of the neural tangent kernel. We show the instability of
learning and analyze the agent's behavior in the frequency domain. Then, we
ensure convergence via robust controllers acting as dynamical rewards in the
loss function. We synthesize three controllers: a state-feedback
gain-scheduling H2 controller, a dynamic H∞ controller, and a constant-gain …
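The central object in the abstract is an NTK-based linear time-invariant model of the learning dynamics, whose stability can then be checked with standard control tools. Below is a minimal sketch of that idea, assuming (as is common in NTK analyses of TD/Q-learning, not necessarily the authors' exact formulation) a constant NTK Gram matrix K and a fixed transition operator P, so the Q-value vector evolves as dq/dt = -K(I - γP)q + Kr; all names and values are illustrative.

```python
# Sketch only: linearized TD/Q-learning dynamics under an NTK approximation.
# With a constant Gram matrix K, fixed row-stochastic operator P, rewards r,
# and discount gamma, the Q-values follow the LTI system
#     dq/dt = A q + K r,   A = -K (I - gamma * P).
# Learning is unstable exactly when A has an eigenvalue in the right half-plane.
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 8, 0.99

# Hypothetical NTK Gram matrix: symmetric positive semi-definite by construction.
J = rng.normal(size=(n, n))
K = J @ J.T / n

# Hypothetical transition operator: row-stochastic.
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)

r = rng.normal(size=n)

A = -K @ (np.eye(n) - gamma * P)       # "plant" matrix of the learning dynamics
eigs = np.linalg.eigvals(A)
print("max Re(lambda):", eigs.real.max())  # > 0 would indicate unstable learning

# Forward-Euler simulation of the linearized learning dynamics.
q, dt = np.zeros(n), 1e-2
for _ in range(5000):
    q = q + dt * (A @ q + K @ r)
print("||q|| after simulation:", np.linalg.norm(q))
```

In this form, the frequency-domain analysis and the H2/H∞ controllers mentioned in the abstract would act on A: a controller shaping the reward term plays the role of feedback that moves the closed-loop eigenvalues into the left half-plane, ensuring convergence.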

