April 8, 2024, 4:42 a.m. | Noah Golowich, Ankur Moitra, Dhruv Rohatgi

cs.LG updates on arXiv.org

arXiv:2404.03774v1 Announce Type: new
Abstract: Supervised learning is often computationally easy in practice. But to what extent does this mean that other modes of learning, such as reinforcement learning (RL), ought to be computationally easy by extension? In this work we show the first cryptographic separation between RL and supervised learning, by exhibiting a class of block MDPs and associated decoding functions where reward-free exploration is provably computationally harder than the associated regression problem. We also show that there is …
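For readers unfamiliar with the formalism, the sketch below illustrates the two objects the separation compares: a block MDP, where each rich observation is emitted from exactly one hidden latent state, and a decoding function that maps observations back to latent states, which is the target of the associated supervised regression problem. This is a minimal toy illustration under assumed dynamics; the state space, emission, and transition rule here are hypothetical and are not the paper's cryptographic construction.

```python
# Toy illustration of a block MDP (hypothetical example, not the
# construction from the paper). A block MDP has a small latent state
# space; the agent only sees rich observations, each generated from
# exactly one latent state, so a "decoding function" mapping
# observation -> latent state is well defined.
import random

LATENT_STATES = [0, 1, 2]          # small hidden state space
ACTIONS = [0, 1]
OBS_DIM = 8                        # observations are richer than states

def emit(latent: int) -> tuple:
    """Emission: sample a noisy observation from the block of `latent`.
    Blocks are disjoint, so the latent state is recoverable in principle."""
    noise = [random.random() for _ in range(OBS_DIM - 1)]
    return (latent, *noise)        # first coordinate tags the block (toy)

def decode(obs: tuple) -> int:
    """Decoding function: map an observation back to its latent state.
    Fitting this map from labeled pairs is the supervised problem."""
    return int(obs[0])

def step(latent: int, action: int) -> int:
    """Toy latent transition dynamics (assumption for illustration)."""
    return (latent + action + 1) % len(LATENT_STATES)

# Supervised learning sees (observation, latent) pairs and fits `decode`.
# Reward-free exploration must instead drive the latent dynamics to
# visit every latent state without reward feedback; the abstract above
# states that the latter can be cryptographically hard even when the
# former is computationally easy.
latent = 0
visited = {latent}
for _ in range(10):
    obs = emit(latent)
    assert decode(obs) == latent   # regression target is consistent
    latent = step(latent, random.choice(ACTIONS))
    visited.add(latent)
print("latent states visited:", sorted(visited))
```

In this toy version both problems are trivial; the point of the paper's construction is to make `decode` easy to learn from samples while making systematic visitation of the latent states computationally intractable.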

