Oct. 14, 2022, 1:14 a.m. | Maxime Haddouche, Benjamin Guedj

stat.ML updates on arXiv.org arxiv.org

Most PAC-Bayesian bounds hold in the batch learning setting where data is
collected at once, prior to inference or prediction. This somewhat departs from
many contemporary learning problems where data streams are collected and the
algorithms must dynamically adjust. We prove new PAC-Bayesian bounds in this
online learning framework, leveraging an updated definition of regret, and we
revisit classical PAC-Bayesian results with a batch-to-online conversion,
extending their remit to the case of dependent data. Our results hold for
bounded losses, …

arxiv bayes
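For context, a classical batch PAC-Bayes bound of the kind the batch-to-online conversion starts from is the McAllester-style inequality sketched below. The notation (data-free prior \pi, posterior \rho, population risk R, empirical risk \hat{R}_S) is standard background and is not taken from the paper, whose abstract is only excerpted above; the paper's own online bounds and updated regret definition are not reproduced here.

% McAllester-style batch PAC-Bayes bound for losses bounded in [0,1]:
% with probability at least 1 - \delta over an i.i.d. sample S of size n,
% simultaneously for all posteriors \rho over the hypothesis class,
\mathbb{E}_{h \sim \rho}\!\left[ R(h) \right]
  \;\le\;
\mathbb{E}_{h \sim \rho}\!\left[ \hat{R}_S(h) \right]
  + \sqrt{ \frac{ \mathrm{KL}(\rho \,\|\, \pi) + \ln \frac{2\sqrt{n}}{\delta} }{ 2n } },
% where \mathrm{KL} denotes the Kullback-Leibler divergence between the
% posterior \rho and the prior \pi chosen before seeing the sample S.

The online setting studied in the paper relaxes the fixed i.i.d. sample and data-free prior assumed above, allowing dependent, streaming data; the precise statements require the full paper.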
