June 6, 2024, 4:43 a.m. | Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba

cs.LG updates on arXiv.org

arXiv:2406.03434v1 Announce Type: new
Abstract: Off-policy learning (OPL) often involves minimizing a risk estimator based on importance weighting to correct the bias introduced by the logging policy used to collect the data. However, this method can produce an estimator with high variance. A common solution is to regularize the importance weights and learn the policy by minimizing an estimator with penalties derived from generalization bounds specific to that estimator. This approach, known as pessimism, has gained recent attention but lacks a unified …
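To illustrate the setting the abstract describes, here is a minimal sketch of an importance-weighted (IPS) risk estimator with an optional weight cap as a simple form of regularization. This is a generic illustration of the technique, not the paper's specific estimator or penalty; the function name and the clipping scheme are assumptions for the example.

```python
import numpy as np

def ips_risk(costs, pi_target, pi_logging, clip=None):
    """Importance-weighted (IPS) estimate of a target policy's risk.

    costs      : per-sample costs observed under the logging policy
    pi_target  : target-policy probabilities of the logged actions
    pi_logging : logging-policy probabilities of the same actions
    clip       : optional cap on the importance weights (variance control)
    """
    w = np.asarray(pi_target) / np.asarray(pi_logging)  # importance weights
    if clip is not None:
        w = np.minimum(w, clip)  # clipped (regularized) weights
    return float(np.mean(w * np.asarray(costs)))
```

Capping the weights trades a small bias for a large reduction in variance when the logging policy rarely takes actions the target policy favors; pessimistic approaches instead add a penalty term derived from a generalization bound to the objective being minimized.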

