Web: http://arxiv.org/abs/2202.04985

June 17, 2022, 1:12 a.m. | Gergely Neu, Gábor Lugosi

stat.ML updates on arXiv.org

Since the celebrated works of Russo and Zou (2016, 2019) and Xu and Raginsky
(2017), it has been well known that the generalization error of supervised
learning algorithms can be bounded in terms of the mutual information between
their input and the output, given that the loss of any fixed hypothesis has a
subgaussian tail. In this work, we generalize this result beyond the standard
choice of Shannon's mutual information to measure the dependence between the
input and the output. Our …
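For context, the classical result cited above (Xu and Raginsky, 2017) can be written out explicitly; the notation below is a standard choice for stating it, not taken from the truncated abstract. If the loss of every fixed hypothesis w is \sigma-subgaussian under the data distribution, then an algorithm that maps an i.i.d. sample S of n points to an output hypothesis W satisfies

\[
  \bigl| \mathbb{E}\bigl[ L_\mu(W) - L_S(W) \bigr] \bigr|
    \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]

where L_\mu(W) and L_S(W) denote the population and empirical risks of W, and I(S; W) is the Shannon mutual information between the algorithm's input and output. The work announced here generalizes this bound by replacing I(S; W) with other measures of dependence between the input and the output.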

Tags: analysis, arxiv, ml
