Model-based Offline Quantum Reinforcement Learning
April 17, 2024, 4:42 a.m. | Simon Eisenmann, Daniel Hein, Steffen Udluft, Thomas A. Runkler
cs.LG updates on arXiv.org (arxiv.org)
Abstract: This paper presents the first algorithm for model-based offline quantum reinforcement learning and demonstrates its functionality on the cart-pole benchmark. The model and the policy to be optimized are each implemented as variational quantum circuits. The model is trained by gradient descent to fit a pre-recorded data set. The policy is optimized with a gradient-free optimization scheme using the return estimate given by the model as the fitness function. This model-based approach allows, in principle, …
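The abstract describes a two-stage procedure: a variational quantum circuit (VQC) transition model is fit to the pre-recorded data set by gradient descent, and a second VQC policy is then tuned by a gradient-free optimizer that scores candidate policies with return estimates obtained by rolling out the learned model. The following is a minimal, hypothetical sketch of that loop in PennyLane, not the authors' implementation: the circuit layouts, the state/action encoding, the stand-in offline data, the surrogate cart-pole reward, and the (1+1) evolution strategy used for the gradient-free step are all assumptions made for illustration.

```python
# Hypothetical sketch (not the authors' code) of model-based offline quantum RL:
# stage 1 fits a VQC transition model by gradient descent on offline data;
# stage 2 optimizes a VQC policy gradient-free, using model-based return
# estimates as the fitness function.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 5   # assumed encoding: 4 cart-pole state dimensions + 1 action dimension
n_layers = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(weights, features):
    # Generic variational circuit: angle encoding followed by entangling layers.
    qml.AngleEmbedding(features, wires=range(len(features)))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(4)]  # 4 outputs = next state

def model_predict(weights, state, action):
    # VQC transition model: (state, action) -> predicted next state.
    feats = np.concatenate([state, np.array([action])])
    return qml.math.stack(vqc(weights, feats))

def policy_act(weights, state):
    # VQC policy: state -> continuous action in [-1, 1] (first expectation value).
    return vqc(weights, state)[0]

# Stage 1: fit the model to pre-recorded transitions by gradient descent.
def model_loss(weights, batch):
    loss = 0.0
    for s, a, s_next in batch:
        loss = loss + np.sum((model_predict(weights, s, a) - s_next) ** 2)
    return loss / len(batch)

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
model_w = np.random.uniform(0, np.pi, size=shape)
offline_data = [(np.random.uniform(-1, 1, 4), np.random.uniform(-1, 1),
                 np.random.uniform(-1, 1, 4)) for _ in range(32)]  # stand-in data set
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(20):
    model_w = opt.step(lambda w: model_loss(w, offline_data), model_w)

# Stage 2: gradient-free policy search; fitness = return estimated by the model.
def estimated_return(policy_w, start_states, horizon=10):
    total = 0.0
    for s in start_states:
        for _ in range(horizon):
            a = float(policy_act(policy_w, s))
            s = np.array([float(x) for x in model_predict(model_w, s, a)])
            total += 1.0 - abs(float(s[2]))  # surrogate reward: keep pole angle near 0
    return total / len(start_states)

starts = [np.random.uniform(-0.05, 0.05, 4) for _ in range(4)]
policy_w = np.random.uniform(0, np.pi, size=shape)
best = estimated_return(policy_w, starts)
for _ in range(50):  # simple (1+1) evolution strategy stands in for the paper's optimizer
    cand = policy_w + 0.1 * np.random.normal(size=shape)
    fit = estimated_return(cand, starts)
    if fit > best:
        policy_w, best = cand, fit
```

The point the sketch illustrates is that only the model-fitting stage needs gradients; the policy search treats the model's return estimate purely as a black-box fitness, which is what makes a gradient-free optimization scheme applicable.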