Oct. 14, 2022, 1:13 a.m. | Maxime Haddouche, Benjamin Guedj

cs.LG updates on arXiv.org

Most PAC-Bayesian bounds hold in the batch learning setting, where data is
collected at once, prior to inference or prediction. This somewhat departs from
many contemporary learning problems, in which data arrive as a stream and
algorithms must adjust dynamically. We prove new PAC-Bayesian bounds in this
online learning framework, leveraging an updated definition of regret, and we
revisit classical PAC-Bayesian results with a batch-to-online conversion,
extending their remit to the case of dependent data. Our results hold for
bounded losses, …
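
For context, the classical batch PAC-Bayesian result that the abstract refers to revisiting is typically stated in a McAllester-style form (a standard statement for losses bounded in [0, 1]; this is background, not the paper's new online bound): with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $n$, simultaneously for every posterior $\rho$ and any data-independent prior $\pi$,

\[
\mathbb{E}_{h\sim\rho}\big[L(h)\big] \;\le\; \mathbb{E}_{h\sim\rho}\big[\hat{L}_S(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}},
\]

where $\hat{L}_S$ is the empirical loss on $S$ and $L$ the population loss. The abstract's contribution, as described, is to extend guarantees of this flavour beyond the i.i.d. batch setting to streaming, dependent data via a regret-style analysis and a batch-to-online conversion.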

